How much do you know?
The University of California, Berkeley School of Information Management and Systems has conducted research since 1999 to determine how much new information is created each year. This topic of study, and its results, have direct implications for grassroots science. First, this is the kind of thing that armchair physicists and free-time mathematicians could and should be working on. What other topics like this could be addressed?
Second, with so much information being created every year, how do amateur and underfunded scientists stay on top of it all? With all this information out there, how do those same scientists get their results out to the world? Just a couple of thoughts.
Read the study at:
I was looking at the Cluster Foundry on SourceForge and came across a link to the Condor Project at the University of Wisconsin-Madison. Condor distributes computing jobs across multiple systems, and versions are available for both Windows and Linux.
We have been looking at using openMosix as the cluster component in Turtlshel Linux for the PSC, but openMosix hasn’t been ported to Windows yet. If we used Condor instead, users could run a Windows machine as the master computer if they wanted.
Actually, I just Googled it, but these days, isn’t Googling a lot like performing a literature review? OK, maybe not. I wanted to see if the concept of the PSC was out there already in the world. What I found was partly what I expected: no-name companies trying to pass off flashy, slightly-faster-than-off-the-shelf computers as supercomputers. Some of what I found, however, surprised me. The idea of a personal supercomputer is out there. It seems a bit of a holy grail in the sense that although there is a bunch of chatter and hoping for a machine cheap enough and yet powerful enough to call a PSC, the devices just aren’t here yet.
Anyway, this is what I’ve found so far. I’ll add to the list as I find more.
1. An article on a new parallel co-processor from ClearSpeed Technology. Not quite like what we have in mind, but this looks like another step towards bringing Super Computing into the family room. But how much did they say it costs? Ouch.
2. An interesting page on super-computers in general and their uses.
3. A page on grassroots science, but it makes mention of PSCs playing an important role in grassroots research.
4. An article on extreme computing that seems to cover some of the same issues we are facing, and also mentions an Intel Personal Super Computer that was introduced in the 1980s.
5. The Gravitor project bills itself as an Almost Super Computer.
Some thoughts from an article on the Stone SouperComputer at Oak Ridge National Laboratory.
All nodes of the cluster should be isolated on a private LAN with their own Ethernet hub or switch.
The first, or master, node in the cluster should have an additional Ethernet card to connect it to the normal, routed network as well as to the private network.
To increase inter-node bandwidth, additional NICs can be installed in each node.
The private LAN must use a block of IP addresses not routed on the Internet; it is usually easiest to use the Class A 10.0.0.0 address space, which is reserved for private networks.
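The network layout described above can be sketched in a few commands. This is only an illustrative example: the interface names (eth0, eth1) and addresses are assumptions, and on a real cluster you would put the equivalent settings in your distribution's network configuration files rather than run them by hand.

```shell
# Master node: eth0 stays on the normal, routed network (configured as
# usual); eth1 faces the private cluster LAN on the 10.0.0.0 space.
ifconfig eth1 10.0.0.1 netmask 255.0.0.0 up

# Each compute node gets its own private address on its single NIC,
# e.g. 10.0.0.2, 10.0.0.3, and so on:
ifconfig eth0 10.0.0.2 netmask 255.0.0.0 up
```

Because the 10.x.x.x block is never routed on the Internet, the compute nodes stay isolated behind the master node, which is the only machine visible on both networks.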
Computational Problems to Communications Problems: this happens when a problem is divided into such small parts across so many nodes that the time required to transmit the problem's data to the nodes in the cluster exceeds the actual CPU time needed to solve the problem. Fewer nodes may result in faster run times.
Cluster heterogeneity is also a factor: faster CPUs must wait for slower CPUs to catch up.
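The trade-off between computation and communication can be seen in a toy model: splitting a fixed amount of work across more nodes shrinks the per-node compute time, but every extra node adds communication overhead. The numbers below are illustrative assumptions, not measurements from any real cluster.

```python
def run_time(n_nodes, work=1000.0, comm_per_node=5.0):
    """Total seconds to finish a job split across n_nodes.

    Compute time shrinks as work is divided up, but communication
    cost grows with the number of nodes that must be fed data.
    (All figures are made-up units for illustration.)
    """
    return work / n_nodes + comm_per_node * n_nodes

# Past a certain point, adding nodes makes the whole job slower:
for n in (1, 5, 14, 40):
    print(n, round(run_time(n), 1))

assert run_time(40) > run_time(14)
```

With these assumed numbers the sweet spot falls around 14 nodes; a 40-node run spends so much time on communication that it finishes well behind the smaller cluster, which is exactly the "fewer nodes may be faster" effect described above.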
I’ve been looking at using the VIA EPIA V10000 for the PSC. However, tonight I came across a review of the “M” 10000. A little too much board for the PSC devices, but already I have little thoughts of sugarplums (and DV recorders, and portable workstations, and on and on) starting to spin in my head.
More to come as I make more plans.