Thank You, Microsoft For Giving Me a Reason (Minecraft) to Teach My Children to Hate You

If you haven’t heard, Microsoft is buying Mojang, the creator of the insanely popular open-world creation game Minecraft.  My children love Minecraft.  They fight over who gets which device, and then, once settled, spend hours playing with each other in an enormous creative world.  These are children who will not get along for anything.  They love Minecraft.  Sometimes all they talk about is Minecraft.  But it gets them talking.  To each other.  For that reason, if for no other, I love Minecraft.  And the game itself is a ton of fun (which I discovered for myself during my annual vacation this last summer).

So the news that Microsoft is buying Mojang?  Horrible.  I come from the generation that couldn’t spell the name without dollar signs.  Micro$oft was my generation’s Babylon the Great, Whore of All the Earth.  The Enemy.  The Man.  The primary justification for the existence of Linux.  And then, about the time my kids came along, Microsoft just wasn’t that important anymore.  They had much more competition, and were even struggling to stay relevant in a new age.  I thought that the undying hate might actually die with me, and my children would grow up in a world of peace.  But today that all ends.  Children, meet the enemy from Redmond.

Disclaimer Note: I have grown up some since my youth, and in spite of the tone of my comments above (which channels more of my children’s anger than my own), I am not an outright opponent of Microsoft.  I use Windows (reluctantly) at work, and even have a Windows partition somewhere on my laptop for those few things (Adobe Illustrator) that just won’t run in Linux.  If I had to say, these days I am rather ambivalent about the whole thing.  But Mr. Satya Nadella is making quite a statement by buying Mojang, and has the opportunity to absolutely piss off an entire generation of children if Microsoft gets this wrong (as it largely seems to have done with past acquisitions).

Copyright Note: The featured image is posted all around as a Creative Commons image, but I have not yet been able to find an original source or author.  If you know where it came from, please point me in the right direction.  Thanks.

Today is the Last Day to Tell the FCC How You Feel About Net Neutrality

Today is the last day to submit online comments to the FCC on Net neutrality!  For a couple of weeks I have intended to tell the FCC how I feel about Net neutrality and its upcoming rule-making, which would uphold the ability of Internet Service Providers to slow down access to services that don’t pay extra fees to stay fast.  What does this mean?  It means that without a ruling in favor of Net neutrality, even though I pay a lot every month for high-speed Internet access that is an order of magnitude or two faster than the national average, if a site I like doesn’t pay the V*****n [name of big-name ISP censored because my legal defense fund is small] tax, my access to it will be slow.  Even if that site pays its own ISP for high-speed access as well.  Make sure you submit comments.


Raising a Glass in Memory of Geoworks

With the 25th anniversary of Microsoft Windows just past, several reviews (like this one, and this one) of long-lost Windows competitors have appeared.  Certainly in our family the only time we used Windows was when we needed sound, because our Turtle Beach sound card was Windows-only.  So Windows 3.0 was used almost exclusively for writing music and playing computer games (but only Windows games – the sound card didn’t work in DOS, and most of the best games were DOS-only – Warlords III, The Humans, Lemmings, The Lost Vikings, Ultima Underworld, Civilization, and a whole host of others).  What did we use instead?  Geoworks.  It wasn’t the best operating system in the world, but its competition was Windows 3 (not an incredibly high bar to clear), and the Geoworks dot-matrix print drivers were amazing.  They could drive my 9-pin printer like it was a $2000 laser (at 2–10 minutes per page).

To learn more about Geoworks:

Image from ToastyTech.com

Stratigraphic Photogrammetry: Part 1 – Calculations for Photometrics on the Olympus C-750

For the last few months, I have been working on a methodology for using terrestrial photogrammetry techniques to capture stratigraphic profiles. The original idea came from the Wenas Mammoth dig (which I wrote about on 10 August 2005), when, at one point, we were not sure whether our trench, cut into a wall of loess, would survive over the weekend. We were only about halfway through our stratigraphic analysis, and a collapse of the trench over the weekend would have set us back considerably. I took systematic close-up photographs of the trench wall, hoping to capture some of the features of the stratigraphy in case we lost the wall.

The trench didn’t fall over the weekend, which was a very good thing: in my ignorance, I made many mistakes taking my photographs, and they were completely unusable. Since then, I have been working on techniques for making measurable photographs of a wall. Because I am using a digital camera to make digital images, the first step in the methodology is to calculate image size, scale, coverage, resolution, and spacing for the camera. Because the focal length, sensor size, and pixel resolution of each digital camera are different, the calculations have to be made for each model of camera (although cameras with the exact same focal length, pixel resolution, and sensor size would share the same calculations). I have an Olympus C-750, which has a focal length of 7.8mm and a sensor size of 5.76mm x 4.39mm. I am including here a table of all the different calculations for my camera, assuming a screen and print resolution of 200dpi, which would give a photo size of 8″ x 10″.
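As a sanity check, here is a minimal sketch of the arithmetic, assuming a simple pinhole-camera model, the focal length and sensor size quoted above, a 2048-pixel-wide image (typical of a 3.2MP camera – my assumption), and the 200dpi print resolution. With those inputs it reproduces the coverage, photoscale, and resolution columns of Table 1.1 below; the stereo-pair spacing columns (which work out to 0.30 and 0.22 times the distance, and appear to correspond to roughly 60% overlap) are left out of the sketch.

```python
# A sketch (my reconstruction, not the original spreadsheet) of the
# coverage, scale, and resolution arithmetic for a simple pinhole
# camera, using the focal length and sensor size quoted in the text.
# The 2048-px image width (3.2 MP) and 200 dpi print are assumptions.

FOCAL_MM = 7.8        # focal length at full wide angle
SENSOR_W_MM = 5.76    # horizontal sensor dimension
SENSOR_H_MM = 4.39    # vertical sensor dimension
IMAGE_W_PX = 2048     # horizontal pixel count of a 3.2 MP image
PRINT_DPI = 200       # screen/print resolution assumed in the text

def coverage_m(distance_m, sensor_dim_mm):
    """Width or height of wall (m) covered by the frame at a distance."""
    return distance_m * sensor_dim_mm / FOCAL_MM

def resolution_mm_per_px(distance_m):
    """Size on the wall (mm) represented by a single pixel."""
    return coverage_m(distance_m, SENSOR_W_MM) * 1000 / IMAGE_W_PX

def photoscale_denom(distance_m):
    """Denominator x of the 1/x photoscale for a print at PRINT_DPI."""
    print_width_in = IMAGE_W_PX / PRINT_DPI          # 10.24 in
    coverage_in = coverage_m(distance_m, SENSOR_W_MM) / 0.0254
    return coverage_in / print_width_in

# Reproduces the first rows of Table 1.1:
for d in (0.5, 1.0, 1.5):
    print(f"{d:4.1f} m: DV={coverage_m(d, SENSOR_H_MM):.2f} m  "
          f"DH={coverage_m(d, SENSOR_W_MM):.2f} m  "
          f"scale=1/{photoscale_denom(d):.1f}  "
          f"res={resolution_mm_per_px(d):.2f} mm/px")
```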

Update (11/12/05): Going out into the field today to actually try making metric photographs with my camera, using the calculations from Table 1.1 below, I realized that the calculations I made were actually for the Olympus C-740 and C-760 3.2MP UltraZoom cameras, and that the model I actually own is the Olympus C-750 4MP. The original table of calculations is of no use to me because I don’t own those cameras, but I will leave it here on the chance that someone might be able to make some use of it. I have added the calculations for the Olympus C-750, the camera that I actually own, as Table 1.2, below.

 

Update (11/14/05): I have posted Part 2 of my discussion of stratigraphic photogrammetry using a consumer-level digital camera.

 

Table 1.1 Coverage, Resolution, and Scale Calculations for the Olympus C-740/760 3.2MP Digital Cameras

| Distance from Wall (m) | DV Coverage (m) | DH Coverage (m) | Photoscale (200dpi) | Photo Resolution (mm/px) | Stereo-pair H-spacing (m) | Stereo-pair V-spacing (m) |
|-------|------|------|--------|------|------|------|
| 0.50  | 0.28 | 0.37 | 1/1.4  | 0.18 | 0.15 | 0.11 |
| 1.00  | 0.56 | 0.74 | 1/2.8  | 0.36 | 0.30 | 0.22 |
| 1.50  | 0.84 | 1.11 | 1/4.3  | 0.54 | 0.45 | 0.33 |
| 2.00  | 1.13 | 1.48 | 1/5.7  | 0.72 | 0.60 | 0.44 |
| 2.50  | 1.41 | 1.85 | 1/7.1  | 0.90 | 0.75 | 0.55 |
| 3.00  | 1.69 | 2.22 | 1/8.5  | 1.08 | 0.90 | 0.66 |
| 3.50  | 1.97 | 2.58 | 1/9.9  | 1.26 | 1.05 | 0.77 |
| 4.00  | 2.25 | 2.95 | 1/11.4 | 1.44 | 1.20 | 0.88 |
| 4.50  | 2.53 | 3.32 | 1/12.8 | 1.62 | 1.35 | 0.99 |
| 5.00  | 2.81 | 3.69 | 1/14.2 | 1.80 | 1.50 | 1.10 |
| 5.50  | 3.10 | 4.06 | 1/15.6 | 1.98 | 1.65 | 1.21 |
| 6.00  | 3.38 | 4.43 | 1/17.0 | 2.16 | 1.80 | 1.32 |
| 6.50  | 3.66 | 4.80 | 1/18.5 | 2.34 | 1.95 | 1.43 |
| 7.00  | 3.94 | 5.17 | 1/19.9 | 2.52 | 2.10 | 1.54 |
| 7.50  | 4.22 | 5.54 | 1/21.3 | 2.70 | 2.25 | 1.65 |
| 8.00  | 4.50 | 5.91 | 1/22.7 | 2.88 | 2.40 | 1.76 |
| 8.50  | 4.78 | 6.28 | 1/24.1 | 3.06 | 2.55 | 1.87 |
| 9.00  | 5.07 | 6.65 | 1/25.6 | 3.25 | 2.70 | 1.98 |
| 9.50  | 5.35 | 7.02 | 1/27.0 | 3.43 | 2.85 | 2.09 |
| 10.00 | 5.63 | 7.38 | 1/28.4 | 3.61 | 3.00 | 2.20 |
Table 1.2 Coverage, Resolution, and Scale Calculations for the Olympus C-750 4MP Digital Camera

| Distance from Wall (m) | DV Coverage (m) | DH Coverage (m) | Photoscale (200dpi) | Photo Resolution (mm/px) | Stereo-pair H-spacing (m) | Stereo-pair V-spacing (m) |
|-------|------|------|--------|------|------|------|
| 0.50  | 0.35 | 0.46 | 1/1.6  | 0.22 | 0.15 | 0.11 |
| 1.00  | 0.70 | 0.91 | 1/3.1  | 0.45 | 0.30 | 0.22 |
| 1.50  | 1.05 | 1.37 | 1/4.7  | 0.67 | 0.45 | 0.33 |
| 2.00  | 1.39 | 1.83 | 1/6.3  | 0.89 | 0.60 | 0.44 |
| 2.50  | 1.74 | 2.29 | 1/7.9  | 1.12 | 0.75 | 0.55 |
| 3.00  | 2.09 | 2.74 | 1/9.4  | 1.34 | 0.90 | 0.66 |
| 3.50  | 2.44 | 3.20 | 1/11.0 | 1.56 | 1.05 | 0.77 |
| 4.00  | 2.79 | 3.66 | 1/12.6 | 1.79 | 1.20 | 0.88 |
| 4.50  | 3.14 | 4.11 | 1/14.2 | 2.01 | 1.35 | 0.99 |
| 5.00  | 3.48 | 4.57 | 1/15.7 | 2.23 | 1.50 | 1.10 |
| 5.50  | 3.83 | 5.03 | 1/17.3 | 2.46 | 1.65 | 1.21 |
| 6.00  | 4.18 | 5.49 | 1/18.9 | 2.68 | 1.80 | 1.32 |
| 6.50  | 4.53 | 5.94 | 1/20.5 | 2.90 | 1.95 | 1.43 |
| 7.00  | 4.88 | 6.40 | 1/22.0 | 3.13 | 2.10 | 1.54 |
| 7.50  | 5.23 | 6.86 | 1/23.6 | 3.35 | 2.25 | 1.65 |
| 8.00  | 5.57 | 7.31 | 1/25.2 | 3.57 | 2.40 | 1.76 |
| 8.50  | 5.92 | 7.77 | 1/26.7 | 3.79 | 2.55 | 1.87 |
| 9.00  | 6.27 | 8.23 | 1/28.3 | 4.02 | 2.70 | 1.98 |
| 9.50  | 6.62 | 8.69 | 1/29.9 | 4.24 | 2.85 | 2.09 |
| 10.00 | 6.97 | 9.14 | 1/31.5 | 4.46 | 3.00 | 2.20 |

ALTERNATE CLUSTER SYSTEM FOR THE PSC?

I was looking at the Cluster Foundry on SourceForge and came across a link to the Condor Project at the University of Wisconsin–Madison. Condor distributes computing jobs across multiple systems, and versions are available for both Windows and Linux.

We have been looking at using openMosix as the cluster component in Turtlshel Linux for the PSC, but openMosix hasn’t been ported to Windows yet. If we used Condor instead, users could run a Windows machine as the master node if they wanted.

LITERATURE REVIEW OF “PERSONAL SUPER COMPUTER”

Actually, I just Googled it, but these days, isn’t Googling a lot like performing a literature review? OK, maybe not. I wanted to see if the concept of the PSC was already out there in the world. What I found was partly what I expected: no-name companies trying to pass off flashy, slightly-faster-than-off-the-shelf computers as supercomputers. Some of what I found, however, surprised me. The idea of a personal super computer is out there. It seems a bit of a holy grail: there is plenty of chatter and hoping for a machine cheap enough and yet powerful enough to call a PSC, but the devices just aren’t here yet.

Anyway, this is what I’ve found so far. I’ll add to the list as I find more.

1. An article on a new parallel co-processor from ClearSpeed Technology. Not quite what we have in mind, but it looks like another step toward bringing supercomputing into the family room. But how much did they say it costs? Ouch.

http://www.ce.org/publications/vision/2004/janfeb/p52.asp?bc=dept&department_id=10

2. An interesting page on supercomputers in general and their uses.

http://www.thocp.net/hardware/supercomputers.htm

3. A page on grassroots science, which mentions PSCs playing an important role in grassroots research.

http://www.sheldrake.org/evolutionary_mind/grassroots_science.html

4. An article on extreme computing that covers some of the same issues we are facing, and also mentions an Intel Personal Super Computer introduced in the 1980s.

http://www.idi.ntnu.no/~elster/pubs/sims02-elster.pdf

5. The Gravitor project bills itself as an Almost Super Computer.

http://obswww.unige.ch/~pfennige/gravitor/gravitor_e.html

RESOURCES FOR BUILDING YOUR OWN LINUX DISTRIBUTION

This is just a quick list of links to the resources I have found so far for building your own Linux distribution.

Info from VIA Arena (This will be a big base for our work on Turtlshel Linux)

http://c2.com/cgi/wiki?LinuxFromScratch

http://linux.box.sk/newsread.php?newsid=767

http://www.superant.com/cgi-bin/smalllinux.pl?SmallLinuxLikeProjects

http://www.lesbell.com.au/Home.nsf/0/09212fde3c0c7597ca256ce70022e91c?OpenDocument

http://www.tectonic.co.za/default.php?id=297

CLUSTER: POINTS-TO-THINK-ABOUT

Some thoughts from an article on the Stone SouperComputer at Oak Ridge National Laboratory.

  • All nodes of the cluster should be isolated on a private LAN with their own Ethernet hub or switch.
  • The first (master) node in the cluster should have an additional Ethernet card to connect it to the normal, routed network as well as to the private network.
  • To increase inter-node bandwidth, additional NICs can be installed in each node.
  • The private LAN must use a block of IPs not routed on the Internet – it is usually easiest to use the Class A 10.0.0.0 address space, which is reserved for private, non-routed networks.
  • Computational problems become communications problems when a problem is divided into such small parts across so many nodes that the time required to transmit the problem’s data to the nodes exceeds the actual CPU time needed to solve it. Fewer nodes may result in faster run times (see the sketch after this list).
  • Cluster heterogeneity is a factor: faster CPUs must wait for slower CPUs to catch up.
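To make the computation-versus-communication point concrete, here is a toy model – my own illustration, with invented constants, not figures from the Stone SouperComputer article – in which total run time is the per-node share of CPU work plus a fixed communication cost per node:

```python
# Toy model of the computation-vs-communication tradeoff: splitting work
# across more nodes shrinks per-node CPU time but grows transfer overhead.
# Both constants below are invented for illustration.

CPU_SECONDS = 600.0           # total serial CPU work in the problem
COMM_SECONDS_PER_NODE = 5.0   # fixed data-transfer cost added per node

def run_time(nodes: int) -> float:
    """Estimated wall-clock time with the work split across `nodes`."""
    return CPU_SECONDS / nodes + COMM_SECONDS_PER_NODE * nodes

best = min(range(1, 65), key=run_time)
for n in (1, 4, best, 64):
    print(f"{n:3d} nodes -> {run_time(n):6.1f} s")
# Past the sweet spot (11 nodes with these constants), adding machines
# makes the run slower, which is why fewer nodes can mean faster runs.
```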