Bush and Cheney's defense of blocking energy task force documents is terribly frustrating. Their excuse that conversations must be confidential to be candid is almost sinister. If you can't speak in public, in front of the people whose will you're supposed to represent, you shouldn't purport to value democracy. I understand that corporations and other lobbyists have a positive part to play in politics (providing very specific information, for free), but if it can't be made public, we need to know why that is. They say that they're willing to take the PR hit in order to keep the presidency powerful; but I suspect they're simply choosing a small hit from uncertain citizens over a larger hit from informed citizens.
I also find it hard to believe that with so many experts and freely available studies on the energy economy, Enron was the very best source to consult. Why meet with Enron at least six times? This looks like the classic partisan research strategy: find an expert whose results match your agenda, and latch on. At least that seems to be the pattern when it comes to arsenic, stem cells, the economy, global warming, and so forth.
We need to elect a scientist, people.
It was 75°F today. The wind and birds have returned too early. On Friday, the weather pendulum is going to bring us a massive snowstorm.
Ever do a late-night redesign, then wake up the next day, load up the browser, and just about puke? It looks like the Borg fought the Xbox here. White text on gray ... what was I thinking? I do stand behind my sitemap and linkmap trees, done with CSS1 and little bitmap arrows. Extreeeme CSS. (OK, I've fixed the stylesheet now.)
I'm currently reading Swarm Intelligence from the Santa Fe Institute. The book goes into great detail on both specific emergent-behavior systems and the classes of problems they are suited to solve: self-assembly, task allocation, searching, and the classic traveling salesman problem.
Interestingly, one of the self-assembly simulations is little more than a modified DLA sim. Virtual wasps carry around particles and drop them when they encounter a specific configuration of other particles; DLA particles are dropped when they touch any configuration. The experiments described in the book run on a three-dimensional grid of blocks, just as simple DLA sims do; but using the code I've already written for DLA, I'll be able to try it in a more realistic space. Stay tuned!
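To make the difference between the two deposition rules concrete, here's a toy 2D sketch of my own (not the book's code, and the wasp template below is an invented example): a DLA walker sticks the moment any neighbor is occupied, while a wasp-style walker sticks only when its 3×3 neighborhood matches one exact template.

```python
import random

SIZE = 41
grid = [[0] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = 1  # seed particle at the center

def neighborhood(x, y):
    """The 3x3 block of cells around (x, y), as a flat 9-tuple (torus wrap)."""
    return tuple(grid[(y + dy) % SIZE][(x + dx) % SIZE]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1))

def dla_rule(x, y):
    # DLA: stick if ANY of the 8 surrounding cells is occupied.
    n = neighborhood(x, y)
    return any(n[i] for i in range(9) if i != 4)

# Hypothetical wasp template: stick only when exactly the cell directly
# below is filled -- this grows a column instead of a branching cluster.
TEMPLATE = (0, 0, 0, 0, 0, 0, 0, 1, 0)

def wasp_rule(x, y):
    # Wasp-style: stick only on an exact neighborhood match.
    return neighborhood(x, y) == TEMPLATE

def deposit(rule, n_particles):
    """Release random walkers one at a time until n_particles have stuck."""
    placed = 0
    while placed < n_particles:
        x, y = random.randrange(SIZE), random.randrange(SIZE)
        for _ in range(10000):  # random walk until the rule fires
            if grid[y][x] == 0 and rule(x, y):
                grid[y][x] = 1
                placed += 1
                break
            x = (x + random.choice((-1, 0, 1))) % SIZE
            y = (y + random.choice((-1, 0, 1))) % SIZE
        else:
            break  # walker never stuck; give up
```

Swapping `dla_rule` for `wasp_rule` in `deposit` is the entire difference between the two simulations, which is why the wasp model really is "little more than a modified DLA sim."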
Version 1.0 of DLATOOL is finished. I'm glad it's over for now; I hate writing parsers.
While I was working on the manual, the first snow of this winter began to fall. Humbling observation: nature was computing aggregations of billions of randomly shaped flakes while my program was getting chunky on its 2000th simple sphere. Screw this infinitely parallel-processing universe.
I'm working on a program capable of generating almost any radial DLA structure. Users can define their own drawing and other rules in a script, which will make my own experiments easier as well. As is often the case with such programs, even the bugs are outputting beautiful things.
I've started an on-campus lab-sitting job on Thursdays. There is very little to do there other than keep a lookout and browse the web so you can bet on the DLA program coming out Thursday night.
DeskMod, one of the nicest desktop design communities ever, has come back online after shutting down last summer. This was the site whose refresh button I hovered over before finding Metafilter. It will be a nice change of pace; people don't get into shouting matches over wallpaper design.
Looks like Time Canada has mistakenly revealed the new iMac. Given the tone of the Time article, it's likely that this was the product to be unveiled at tomorrow's MacWorld Expo.
I suspect that this product will end up as an important lesson for the industry: bad industrial design is worse than no industrial design. It's just that ugly.
I have a recurring programming problem that goes like this: I have a bunch of points in space, and I need to know how many points are within a certain distance of a given point.
If I store all my points in a linked list (Fig. 1), then I have to run through the list and calculate the distance to every other point, putting the results into a two-dimensional triangular matrix (triangular because the distance from A to B is the same as from B to A). This is really slow, and it gets quadratically slower as the number of points increases.
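Here's a quick sketch of that brute-force approach (my own illustration, using a plain list rather than a linked list): every pair of points gets its distance computed once, stored in a triangular structure where `dist[i][j]` exists only for `j < i`.

```python
import math

# Four sample points in the plane.
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 4.0)]

# Triangular matrix: dist[i] holds distances from point i to all points j < i.
# Building it touches every pair once -- O(n^2) work.
dist = [[math.hypot(points[i][0] - points[j][0],
                    points[i][1] - points[j][1])
         for j in range(i)]
        for i in range(len(points))]

def neighbors_within(i, r):
    """Indices of all points within distance r of point i."""
    return [j for j in range(len(points)) if j != i and
            (dist[i][j] if j < i else dist[j][i]) <= r]
```

The `j < i` lookup trick is what the triangular layout buys you; the cost problem is that the whole matrix must be rebuilt whenever points move.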
The problem is that a linked list can't tell how things relate spatially, so it has to check every point. So here's what I've done: I split a 2D space into a matrix of cells (Fig. 2), and each cell contains a linked list of the points that fall inside it. When I need to find all points within a certain distance of a central point, I need only check the points in that point's cell and the adjacent cells (Fig. 3). Adding and removing points is as efficient as with a linked list, but searching for close particles is much faster.
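The cell-matrix idea above can be sketched like this (a minimal version of my own; the `Grid` class and its names are illustrative, and I use Python lists where the original uses linked lists):

```python
import math
from collections import defaultdict

class Grid:
    """Bucket grid: points hashed into square cells of side `cell`."""

    def __init__(self, cell):
        self.cell = cell
        self.cells = defaultdict(list)  # (cx, cy) -> list of points

    def _key(self, x, y):
        # Which cell does (x, y) fall into?
        return (math.floor(x / self.cell), math.floor(y / self.cell))

    def add(self, x, y):
        self.cells[self._key(x, y)].append((x, y))

    def remove(self, x, y):
        self.cells[self._key(x, y)].remove((x, y))

    def within(self, x, y, r):
        """All stored points within distance r of (x, y)."""
        # Only scan cells the radius-r disc could touch; when r <= cell,
        # that's just the 3x3 block around the query point's cell.
        cx, cy = self._key(x, y)
        reach = math.ceil(r / self.cell)
        found = []
        for dx in range(-reach, reach + 1):
            for dy in range(-reach, reach + 1):
                for (px, py) in self.cells.get((cx + dx, cy + dy), ()):
                    if math.hypot(px - x, py - y) <= r:
                        found.append((px, py))
        return found
```

Insertion and removal stay O(1) amortized, and a query inspects only a handful of cells instead of every point, which is the speedup described above.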
So tell me, have you seen this before? (Or do I really have to keep coding this convoluted beast?)
My best find: a gasoline-doused red foam clown nose, in a gutter, in Champaign-Urbana.