Groups are trying to complete an exhaustive wildlife inventory of Great Smoky Mountains National Park. Top scientists are participating, but the interesting thing is that so are untrained children. There is plenty of work for anyone to do in as extensive a study as they are trying to accomplish.
I was looking on the Cluster Foundry at SourceForge and came across a link to the Condor Project at the University of Wisconsin at Madison. The Condor Project runs jobs across multiple systems, and they have versions for both Windows and Linux.
We have been looking at using openMosix as the cluster component in Turtlshel Linux for the PSC, but openMosix hasn’t been ported to Windows yet. If we used Condor instead, users could make a Windows machine the master computer if they wanted.
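I haven’t tried Condor yet, but from what I’ve read, you describe a job in a small text file and hand it to `condor_submit`. Something like this, where the program and file names are just placeholders:

```
# Hypothetical Condor submit description; "analyze" is a placeholder program.
universe   = vanilla
executable = analyze
arguments  = input.dat
output     = analyze.out
error      = analyze.err
log        = analyze.log
queue
```

Condor then matches the job to an available node in the pool and ships the results back.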
Actually I just Googled it, but these days, isn’t Googling a lot like performing a literature review? OK, maybe not. I wanted to see if the concept of the PSC was already out there in the world. What I found was partly what I expected: no-name companies trying to pass off flashy, slightly-faster-than-off-the-shelf computers as supercomputers. Some of what I found, however, surprised me. The idea of a personal super computer is out there. It seems a bit of a holy grail: there is plenty of chatter and hoping for a machine cheap enough and yet powerful enough to call a PSC, but the devices just aren’t here yet.
Anyway, this is what I’ve found so far. I’ll add to the list as I find more.
1. An article on a new parallel co-processor from ClearSpeed Technology. Not quite like what we have in mind, but this looks like another step towards bringing Super Computing into the family room. But how much did they say it costs? Ouch.
2. An interesting page on super-computers in general and their uses.
3. A page on grassroots science, but it makes mention of PSCs playing an important role in grassroots research.
4. An article on extreme computing that seems to cover some of the same issues we are facing, and also mentions an Intel Personal Super Computer that was introduced in the 1980s.
5. The Gravitor project bills itself as an Almost Super Computer.
This is just a quick list of links to the resources I have found so far for building your own Linux distribution.
I found this site while looking for something else, but it makes some points that I think are important for grassroots science. The page talks about narrowing research topics until they are focused and relevant, and gives pointers for designing research projects worth doing. One thought I took away from this page: we need to ask more questions that we don’t already know the answers to.
How do you calculate the processing power of a cluster?
How do you measure the respective costs of ownership of various configurations (power draw, cooling, processing power, etc.)?
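Here is my first stab at back-of-envelope answers to both questions. All of the hardware numbers below are made-up examples, not measurements of any real machine:

```python
# Rough sketch: theoretical peak throughput and yearly electricity cost.
# Every constant here (node count, clock, wattage, rate) is hypothetical.

def peak_gflops(nodes, cores_per_node, ghz, flops_per_cycle):
    """Theoretical peak: nodes * cores * clock * FLOPs issued per cycle.
    Real sustained performance (e.g. on a benchmark) is usually far lower."""
    return nodes * cores_per_node * ghz * flops_per_cycle

def yearly_power_cost(nodes, watts_per_node, dollars_per_kwh):
    """Electricity cost of running the whole cluster 24/7 for a year."""
    kwh_per_year = nodes * watts_per_node * 24 * 365 / 1000.0
    return kwh_per_year * dollars_per_kwh

# A hypothetical 8-node cluster of salvaged 500 MHz PCs:
print(peak_gflops(nodes=8, cores_per_node=1, ghz=0.5, flops_per_cycle=1))
# -> 4.0 (GFLOPS, theoretical peak)
print(yearly_power_cost(nodes=8, watts_per_node=100, dollars_per_kwh=0.10))
# dollars per year in electricity alone
```

Dividing the second number (plus cooling and hardware cost) by the first gives a crude dollars-per-GFLOP figure for comparing configurations.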
Some thoughts from an article on the Stone Souper Computer at Oak Ridge National Laboratory.
All nodes of the cluster should be isolated on a private LAN with their own Ethernet hub or switch.
The first, or master, node in the cluster should have an additional Ethernet card to connect it to the normal, routed network as well as to the private network.
To increase inter-node bandwidth, additional NICs can be installed in each node.
The private LAN must have a block of IPs not used on the Internet – it is usually easiest to use the Class A 10.0.0.0 address space which is reserved for non-routed networks.
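To see how the private addressing from the notes above would shake out, here is a little sketch that generates `/etc/hosts` entries for the nodes. The `master`/`nodeNN` naming convention is my own invention, not from the article:

```python
# Sketch: generate /etc/hosts entries for the private cluster LAN,
# using a subnet inside the reserved 10.0.0.0 space. Hostnames are
# a hypothetical convention (master, node01, node02, ...).
import ipaddress

def hosts_entries(n_nodes, network="10.0.0.0/24"):
    net = ipaddress.ip_network(network)
    addrs = list(net.hosts())          # skips network and broadcast addresses
    lines = []
    for i in range(n_nodes):
        name = "master" if i == 0 else f"node{i:02d}"
        lines.append(f"{addrs[i]}\t{name}")
    return lines

for line in hosts_entries(4):
    print(line)
# 10.0.0.1  master
# 10.0.0.2  node01
# 10.0.0.3  node02
# 10.0.0.4  node03
```

Appending these lines to `/etc/hosts` on every node lets them find each other by name without running DNS on the private LAN.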
Computational Problems to Communications Problems – happens when a problem is divided into such small parts across so many nodes that the time required to transmit the problem’s data to the nodes in the cluster exceeds the actual CPU time needed to solve the problem. Fewer nodes may result in faster run times.
Cluster heterogeneity is also a factor: faster CPUs must wait for slower CPUs to catch up.
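The computation-versus-communication tradeoff above can be played with in a toy model. All the constants here (problem size, per-node speed, network cost) are invented, but the shape of the result is the point: past a certain node count, adding nodes makes the job slower:

```python
# Toy model of the tradeoff: compute time shrinks as nodes are added,
# but the master's cost of shipping data to every node grows.
# All constants are invented for illustration.

def run_time(nodes, total_flops=1e9, flops_per_sec=1e8,
             bytes_per_node=1e6, bytes_per_sec=1e6, latency=0.01):
    compute = total_flops / (nodes * flops_per_sec)            # shrinks
    comm = nodes * (latency + bytes_per_node / bytes_per_sec)  # grows
    return compute + comm

best = min(range(1, 33), key=run_time)
print(best)  # -> 3: with these numbers, 3 nodes beats both 1 and 32
```

With a cheaper network (raise `bytes_per_sec`, lower `latency`) the sweet spot moves toward more nodes, which is presumably why the article recommends extra NICs for inter-node bandwidth.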
More to come as I make more plans.