Friday, January 15, 2016

Bitcoin Reaches the End of its Life

An online buddy of mine, John A. Hardy, shared this article on Medium with me today, and it is a truly fascinating read. One of the Bitcoin software developers and spokespeople, Mike Hearn, has turned his back on the Bitcoin community. This is a strong sign that the Bitcoin social experiment is truly at an end, concluding in failure.

I have tried arguing with Bitcoin proponents over the Internet, but I always ended up arguing with a script. You see, they had constructed a wiki full of clever-sounding rebuttals to common criticisms of the system. So as soon as I commented, someone would reply with a link to a page in their wiki, as if to say, "Nope, you're ignorant and this link proves you wrong. Why don't you educate yourself before making outrageous claims about a system you hardly understand." This is hardly surprising, as most Bitcoin fans are Ayn Rand-loving college kids from wealthy families who believe that they know a lot about computers and that libertarianism and trickle-down economics actually work.

The party is over. Money is only as good as the people who believe it has value. There are a few hard-core believers left, mostly those aforementioned libertarian ideologues who have no understanding of how reality works.

There was a wave of hype around Bitcoin around 2011-2012, and it went viral (that's when I first heard about it). Going viral made the value of the currency spike upward. At that time, millions upon millions of people had at least a mild interest in it and were willing to give it a try, myself included.

But that wave of hype has died, and all that's left are the hard-core libertarian ideologues. Unfortunately for them, this means their beloved money is now becoming more and more worthless. People who sold and got out early got rich; everyone else is left with worthless ones and zeros. At least paper money can be used to wipe your ass; Bitcoin can't even do that.

In short, it was a textbook case of a Ponzi scheme.

I have to say, it is a fascinating algorithm. The biggest problem is that cheating is prevented by making honest participation so computationally expensive that out-computing the honest network costs more than cheating could ever pay, which makes the system necessarily a tremendous waste of electricity and computing resources. This alone made it very problematic.
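To give a rough sense of what the mining computation looks like, here is a minimal proof-of-work sketch in shell. This is purely illustrative: the real system hashes block headers and adjusts the difficulty automatically, and the data and difficulty below are made up.

data="example block data"
difficulty=3    # number of leading zeros required; each extra zero makes mining roughly 16x harder
target=$(printf '0%.0s' $(seq 1 $difficulty))
nonce=0
# Keep incrementing the nonce until the SHA-256 hash of data+nonce starts with
# the required zeros. Finding such a nonce is expensive; verifying it is cheap.
while true; do
  hash=$(printf '%s%s' "$data" "$nonce" | sha256sum | cut -d' ' -f1)
  [ "${hash:0:$difficulty}" = "$target" ] && break
  nonce=$((nonce + 1))
done
echo "found nonce=$nonce hash=$hash"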

Another technical problem was that the block chain was getting to be too big, dozens of gigabytes. Everyone needs a copy of the entire block chain in order to run the algorithm and participate. There are ways to mitigate the block chain size problem, as well as the problem of losing bitcoins to accidentally deleted data (which is what happened in the Mt. Gox incident) or to government seizure of disk drives (which is what happened with Silk Road).

I believe one proposal was to essentially flatten the old block chain, and then spin off an entirely new block chain (a new currency) initialized with the balances from the flattened old block chain. It works sort of like a stock split: the old stocks become worthless, but investors receive a number of new stocks equivalent in value to the old ones. This would lead to inflation, but at a reasonably low rate.

There were myriad other minor technical problems. I don't recall if there was ever a DDoS committed against the Bitcoin network, but that is also a problem that I don't think was ever addressed. There were also clever ways for more powerful sub-networks to mine more blocks than everyone else, using a scheme called "dishonest mining," which was never addressed either.

The biggest problem, the problem that killed Bitcoin, was that the software was too complicated. Only a few technocratic elites understood it well enough to control it. They paid lip service to its open source nature: hypothetically anyone could implement their own Bitcoin software as long as it obeyed the network protocol, and the protocol was ostensibly democratic in nature. But in actual fact the protocol was decided entirely by the elite technocrats who maintained the reference implementation, and this gave them a total monopoly on the currency.

Even worse, good software has to be stupid-proof. Keeping things simple makes them stupid-proof. The cryptography involved in Bitcoin was anything but simple, and idiots who don't know how to keep their computers secure (i.e. everyone) had no good way to protect their Bitcoins from simple, old-fashioned hacks. Simple computer hacks work like picking people's pockets, except now any script-kiddie can run programs that pick pockets for them.

As most intelligent people know, few people are stupider than Ayn Rand-loving libertarian college kids from rich families who believe they know a lot about computers and that trickle-down economics actually works -- the very people who make up the vast majority of the hard-core Bitcoin user base.

So Bitcoin turned out to be yet another in a long list of libertarian social experiments that resulted in *total,* absolute failure.

Following is a quote from the article, and just for some context: the libertarian cult in the United States is powerfully influenced by the Republican Party propaganda machine, especially Fox News. For the past 10 years or so, they have been pushing propaganda that democracy is a bad thing, and that the United States was never a democracy but a republic -- even though the US is a democratic republic, but these people never bother with historical facts. So as a result of this propaganda, libertarian ideologues (especially from the US) hate democracy -- and it shows, by how they maintain authoritarian control of their properties, like their web forums.

Quoting the article:

.....

By running [the Bitcoin] XT [update], miners could cast a vote for changing [an important technical limitation]. Once 75% of blocks were voting for the change the rules would be adjusted and bigger blocks would be allowed.

The release of Bitcoin XT somehow pushed powerful emotional buttons in a small number of people. One of them was a guy who is the admin of the bitcoin.org website and top discussion forums. He had frequently allowed discussion of outright criminal activity on the forums he controlled, on the grounds of freedom of speech. But when XT launched, he made a surprising decision. XT, he claimed, did not represent the “developer consensus” and was therefore not really Bitcoin. Voting was an abomination, he said, because:

"One of the great things about Bitcoin is its lack of democracy"

So he decided to do whatever it took to kill XT completely, starting with censorship of Bitcoin’s primary communication channels: any post that mentioned the words "Bitcoin XT" was erased from the discussion forums he controlled, XT could not be mentioned or linked to from anywhere on the official bitcoin.org website and, of course, anyone attempting to point users to other uncensored forums was also banned. Massive numbers of users were expelled from the forums and prevented from expressing their views.

As you can imagine, this enraged people. Read the comments on the announcement to get a feel for it. Eventually, some users found their way to a new uncensored forum. Reading it is a sad thing. Every day for months I have seen raging, angry posts railing against the censors, vowing that they will be defeated.

But the inability to get news about XT or the censorship itself through to users has some problematic effects. For the first time, investors have no obvious way to get a clear picture of what’s going on. Dissenting views are being systematically suppressed. Technical criticisms of what Bitcoin Core is doing are being banned, with misleading nonsense being peddled in its place. And it’s clear that many people who casually bought into Bitcoin during one of its hype cycles have no idea that the system is about to hit an artificial [software-controlled] limit.

Thursday, June 5, 2014

Me trying to watch The Avengers

This was a few years back. I was on a plane, and The Avengers was one of the films playing. After seeing Spider-Man 2, both Hulk and The Incredible Hulk, and X-Men 3, I had resolved never to watch another comic-book-based film again as long as I live, unless Christopher Nolan is directing.

However, since I had just transferred off an 11-hour flight with a 3-hour stop-over, my computer's battery was dead and I had finished my book. I had heard The Avengers had been pretty well received by audiences. I decided to pass the time watching The Avengers since there was really nothing better to do. So I started watching it, albeit with very low expectations. Lowering my expectations has helped make several movies more enjoyable for me; I understand that some films make no attempt at artistry and exist solely for entertainment purposes. If this is one of those films, that is fine with me.

The film opens. I cannot remember many details, except a few that really stood out to me.

There is Samuel L. Jackson's character with a long black leather jacket and an eye patch. There is some science-fictiony stuff happening with flashy, wispy flares of energy coming out of portals hovering in some gigantic heavy machinery or something, and there are movie scientists and movie military officers here and there in various positions in the scene, all looking very busy and/or concerned about something.

Samuel Jackson's character says, "How bad is it?"

One of the movie scientists, or perhaps one of the movie military officers, responds to him: "That's the problem sir, we don't know."

Ouch. To this day I have spent many idle moments trying to think of a line in any film I have seen that was more cliché than that, but I honestly cannot. I hadn't slept well, I was tired and uncomfortable, and this was going to be a long movie if all of the dialog was that bad. But hey, it was not even two minutes past the start; I trudged on, lowering my expectations as low as they could go.

"I am Loki of Asgard, and I am burdened with glorious purpose."

"Loki? Brother of Thor?"

"Burdened with a glorious purpose? He sure uses big, epic words." But at hearing "brother of Thor," sarcasm could not alleviate my ennui nor my frustration at the fact that dialog this bad can even possibly exist in a popular big-budget film.

"Freedom is life's great lie. Once you accept that, in your heart..."

"So this bad guy here wants to destroy freedom? Wow that is a really bad guy. It's a good thing he said that, I was starting worry that this was turning into one of those heavy, evocative stories with lots of moral gray-areas." Not only is the dialog the most vapid I have ever heard in my life, but the conflict involves an enemy who wants to destroy freedom which has to be the most unimaginative antagonist I have ever seen in my life. Words cannot express how truly bad this is. I am at a loss.

"He's right. The portal is collapsing in on itself. You got maybe two minutes before this goes critical."

By now I am imagining a hundred sarcastic voices in my mind, all shouting at the film makers, all saying something to mock how cliché that dialog is. "Oh no, so little time!" and "It goes critical? That must be bad!" and "The portal is collapsing! Dear god help us all!" were among the things that came to mind less than a moment after hearing that line spoken.

"Coulson, get back to base. This is a level seven. As of right now, we are at war."

I turned the movie screen off, maybe a bit more forcefully than I should have. I just couldn't take it anymore. They were at war, certainly, with my better sensibilities. In less than five minutes this movie had compromised every defense I had against the fiercest onslaught of stupid.

I tried to sleep after that, but my hope for a brighter future had suffered a tremendous blow. Seriously, this is what your average American likes to see? Did whoever wrote that dialog seriously think I would be intrigued by the words "level seven?" Why did this receive such good reviews? Were they categorizing this film as a children's movie and judging it against other films in that category? Why do geeks like it? I thought I was a geek, but apparently not. Have geeks always been this stupid, or have they become this way only in recent generations? Maybe this is a parody? It didn't seem like a parody. It wasn't advertised as a parody.

People will criticize me for not watching the whole thing. True enough, but why should I when the opening is the worst I have seen in my entire life? Does the dialog actually improve? I can see no possible way for this film to make me empathize with the main characters, or have any concern about the conflict with an antagonist who was not at all subtle or clever in the exposition of his goal to destroy "freedom." Only in some extreme circumstance would I ever even consider watching the whole thing.

Sunday, April 20, 2014

Rick and Morty: My New Most Favorite TV Show!

Rick and Morty is one of the most intelligent and funny TV shows I have seen in my life. The characters are very realistic and sympathetic, and the show blends realism with completely outlandish science fiction situations, providing ripe opportunities for humor. Like all of the best science fiction, the story lines are peppered with accurate scientific facts and semi-plausible explanations, although scientific accuracy is not so belabored as to dull the dark, witty humor and surrealism. The artwork is detailed and beautiful, and there is an attention to detail rarely seen in animated TV, to the point where I feel compelled to watch each episode several times. It is near the level of brilliance of shows like Futurama, or what The Simpsons was in the mid 1990s.

The premise of the show begins with the assumption that the infinite multiverse hypothesis is true. That is the hypothesis that existence as we know it follows only one of an infinite number of possible histories, so the universe where you did not procrastinate yesterday and got your work done on time, and the universe where you did procrastinate and did a half-hearted job, both exist in parallel time lines with significantly different futures: perhaps one leads to World War III, and the other leads to you earning the Nobel Prize. In the show, characters travel between all the different possible universes.


Image © Turner Broadcasting System, Inc. posted here under fair use for critical commentary.
Rick flips through a cover-flow of all the different Ricks in all the parallel dimensions.

Rick Sanchez, a somewhat degenerate mad scientist who constantly belches and stammers when he speaks, is one of the only humans with a sufficiently advanced understanding of science to travel between different realities, although apparently interdimensional travel is a common thing for a multitude of other alien races, so much so that there is an "interdimensional customs" which looks a lot like customs at the airport.

Rick is divorced and lives with his daughter Beth, a 34-year-old veterinary heart surgeon employed at a horse race track, and her husband Jerry Smith, an incredibly mediocre man with a not-so-successful career in advertising. Beth and Jerry have a very tenuous relationship: married to Jerry after she got pregnant at seventeen, while they were still in high school, Beth is always thinking about leaving him but never does. Their daughter Summer is seventeen years old and is a most ordinary teenage girl, constantly focused on her smartphone. They also have a son, Morty, who is fourteen years old and seems to be a little bit on the dim-witted side, but who occasionally displays flashes of wisdom.


Image source: disfiguredstick.tumblr.com
Fan artwork of the characters: (from left to right) Rick, Jerry, Beth, Summer, Morty, with their dog Snuffles.

So Morty is Rick's grandson. Being the youngest and least strong-willed, Morty is the easiest for Rick to boss around and always becomes Rick's sidekick on his various misadventures traveling between parallel dimensions to exotic places and times throughout the universes as they quest for things to help Rick do his scientific experiments. It has also been revealed that Morty is Rick's choice as a sidekick for slightly more devious reasons.

Frequently the rest of the family asks Rick to use his superhuman scientific prowess to solve idle problems in their lives (like opening a jar of pickles), and the solutions to these problems often degenerate into extreme and surreal situations that unfold while Rick and Morty are off in some other dimension on one of their adventures.

Although the humor in the show is quite dark, there is one aspect of the show I noticed that is completely unlike most other shows in this genre: death of any character, no matter how minor that character is, is permanent and is almost never something taken lightly. I don't know if this is a conscious effort by the authors to break from the trend set by shows like South Park, Aqua Teen Hunger Force, or Futurama which treat the death of characters with either impermanence or flippancy (or both), or if this is just some way for the authors to make the dark humor even darker in a Monty Python sort of way. Either way, in Rick and Morty, when a character dies on screen, not only do they never come back, but more often than not, there is some brief imagery or some mention in dialog of the fact that the recently departed had a personal story, or that they had a family, and that surely there would be others who would mourn the loss of that person. However what happens to the body after a character dies is completely up for grabs, and nothing is sacred.

Likewise, the characters in the show genuinely care for one another, completely upending the cynical "no hugging, no learning" trend set by shows like Seinfeld and mimicked by myriad other shows, like Aqua Teen Hunger Force. This makes the characters more endearing, even if it means sacrificing a little comedy for drama. In this way Rick and Morty is a show a bit more like Futurama.


Image © Turner Broadcasting System, Inc. posted here under fair use for critical commentary.
Notice the painting on the wall in the background: it is the famous photo series Sallie Gardner at a Gallop. It is little details like this that make the show really stand out.

At the time of this writing, Rick and Morty is still very new, having just wrapped up its first season of 11 episodes. But already there is a huge community of fans. The first episode I saw was the sixth episode, "Rick Potion Number Nine," and that alone was enough for me to see that this was really something worth the devotion of my time. Hopefully the fan base will grow, the show will be successful, and it will stay fresh for at least a few more seasons.


Image © Turner Broadcasting System, Inc. posted here under fair use for critical commentary.

Upgraded my Ubuntu to version 14.04 "Trusty Tahr"

I just updated my system to Ubuntu 14.04 "Trusty Tahr." I can't believe it is time for another Long-Term Support (LTS) release. Two years flies by so quickly.

As usual I just used the do-release-upgrade command right in the command line (as root). Updating went smoothly except for two things: the lock-screen key chord, and the Adobe Flash player for the Chromium browser.

The Chromium browser (the open-source version of Google's Chrome browser) was not loading the Flash plugin. I decided to install Google's non-free licensed Adobe Flash Player, "Pepper Flash," which is distributed with Chrome but not with the open-source Chromium browser because of licensing conflicts. I believe Pepper Flash is licensed by Adobe to Google specifically for them to redistribute with the Chrome browser, and it is the latest version of the Adobe Flash player (version 13), unlike Adobe's own release of Flash Player for Linux, which is stuck at version 11, the last version they support on that platform.

You can install Pepper Flash right out of the multiverse/web section of the Ubuntu package repository: apt-get install pepperflashplugin-nonfree

This will install a few things, most importantly:

  • /usr/sbin/update-pepperflash-plugin    A shell script to download and update Pepper Flash
  • /usr/lib/pepperflashplugin-nonfree/libpepflashplayer.so    The actual plugin used by the browser.
  • /usr/lib/pepperflashplugin-nonfree/etc-chromium-defaults.txt    A shell script you should copy to the path "/etc/chromium-browser/defaults" if that "defaults" file does not already exist (see the command just after this list)
  • /usr/lib/pepperflashplugin-nonfree/pubkey-google.txt
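As an aside, if the "/etc/chromium-browser/defaults" file mentioned above does not already exist, putting the shipped template in its place is a one-liner (run as root; the paths are exactly as the package installed them on my system):

[ -e /etc/chromium-browser/defaults ] || cp /usr/lib/pepperflashplugin-nonfree/etc-chromium-defaults.txt /etc/chromium-browser/defaults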

First you have to execute (as root) the update script with the --install command line argument:

/usr/sbin/update-pepperflash-plugin --install

Unfortunately, Chromium was still not able to detect that it had been installed. But the /usr/bin/chromium-browser executable is just a shell script that launches the actual Chromium Browser executable, and by reading through the code in the script I discovered that it was trying to detect the flash plugin using information stored in the file /usr/lib/chromium-browser/pepper/pepper-flash.info. For whatever reason this file did not exist. You would think it should have been installed by the /usr/sbin/update-pepperflash-plugin script, but no. But from the code of the /usr/bin/chromium-browser launcher script, it was perfectly clear exactly what information this ".info" file should contain. So I made my own /usr/lib/chromium-browser/pepper/pepper-flash.info with the following contents:

PLUGIN_NAME='Pepper Flash';
FILE_NAME='/usr/lib/pepperflashplugin-nonfree/libpepflashplayer.so';
VERSION="$(strings "${FILE_NAME}" 2> /dev/null | grep LNX | cut -d ' ' -f 2 | sed -e 's/,/./g')";
VISIBLE_VERSION="${VERSION}";
DESCRIPTION='Non-free Flash Player Plugin distributed by Google';
MIME_TYPES='';

If you are going to criticize me for using a terrible hack to set the VERSION variable, know this: that is exactly the same line of code that the /usr/sbin/update-pepperflash-plugin install script uses to detect the current version of Pepper Flash. Yes, it is a hack, and it assumes you have the /usr/bin/strings utility installed, which is only available if you have installed the binutils package. But that is just how they programmed it. Not my fault.

Anyway, once I wrote that /usr/lib/chromium-browser/pepper/pepper-flash.info file, Chromium Browser was able to use it, and the Pepper Flash plugin showed up in the list of add-ons at the URL chrome://plugins.

The other problem was that the Control-Alt-Esc key chord for locking the screen had been disabled. To lock the screen, you MUST use Super-L, that's the logo key with the letter L for "lock." This is really annoying, and nothing I do changes it back. I tried to change the key chord in both the Gnome Control Center and in the Compiz Configuration Settings Manager (CCSM), but neither works. When I change the key chord from Super-L to anything else, it simply disables the screen-lock key chord entirely. You MUST use Super-L or nothing at all. I don't know why they did that. If I can figure out how to change it I will.

As for the remainder of the changes to Ubuntu, I am quite pleased. All in all, it isn't too different from before. But Unity and the rest of the Ubuntu user experience is getting more and more stable as time goes on.

The UI elements have been modified slightly. It is always nice to change things up a bit, make it feel fresh. The design of the GUI widgets is a bit more flat, which reminds me a little bit of Windows 8. Obviously this is Ubuntu's way of making their graphical interface look more like what you would see on a mobile device. Also, it could be my imagination, but I think the default fonts are set a bit larger than before. This is nice because it makes things easier to read. Even though my eyesight is near perfect, I would like my eyes to stay that way, which is why I like larger fonts.

And that's all. It seems to be a good, stable release so far, as it should be because it is an LTS release.

Thursday, December 26, 2013

What is Computer Science

(originally posted on Reddit.com)

Computer scientists use the scientific method to study the universes that exist within computers. We design algorithms and hypothesize about the meta-properties of those algorithms. Then we try to falsify the hypotheses about the algorithms by running experiments. In computer science, running an experiment involves actually running the algorithm in a controlled computational environment while measuring various quantities to see if those quantities match our predictions. Examples of things we might quantify and measure are:

  • will the algorithm take fewer steps to complete?
  • will the algorithm take less memory to complete?
  • will the algorithm do more work in less time?
  • will the algorithm loop indefinitely or not?
  • will the algorithm produce good random numbers, or will its output be predictable?
  • can we share the computational steps among multiple computers that run in parallel? If so will it do more work in less time?
  • will the algorithm produce useful graphical visualizations?
  • will the algorithm classify data into useful groups?
  • will the algorithm "think" like a human, can humans relate to it?
Like in natural science, you can hypothesize all you want, but there is no real way to know if your hypothesis is correct until you actually run the experiment (program it and run it).
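For a trivial example of what such an experiment can look like in practice, here is a little shell sketch (the input sizes and the hypothesis are only illustrations): hypothesize that sorting time grows roughly in proportion to n log n, then measure the wall-clock time for increasingly large inputs and see whether the measurements agree.

# Generate shuffled inputs of increasing size and time how long sorting them takes.
for n in 100000 1000000 10000000; do
  seq "$n" | shuf > /tmp/unsorted.txt
  echo "sorting n=$n numbers:"
  time sort -n /tmp/unsorted.txt > /dev/null
done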

And if you should doubt that a computer qualifies as a "universe," I would say that the only difference between a computer and the real universe is that we humans actually built the fundamental laws of nature for the computer universe. Whatever designed the fundamental laws of nature of the real universe is a matter of much speculation, but without any documentation we can only approximate our universe's true nature through a process of reverse engineering called "physics."

Tuesday, April 30, 2013

Installing Ubuntu 13.04 "Raring Ringtail"

So, I decided to update my system to the next version of Ubuntu, 13.04 "Raring Ringtail", this time less than a week after it was released. Usually I wait a few weeks for the bugs to be worked out; there are always a few bugs that slip through because the developers are on a strict release schedule and don't have enough beta-testing resources (nobody has enough, really).

When I got my new computer this year I installed Ubuntu 12.10 "Quantal Quetzal" and I had zero problems, not even with my Nvidia graphics card. Lots of people had problems when upgrading; apt-get had a bug of some kind that either installed the wrong driver or disabled a working driver, or some such thing. I did a clean install (since I had a brand-new blank hard disk) and I simply selected the Nouveau driver and it just worked as though by magic.

I have been updating Ubuntu since about version 10.04 "Lucid Lynx" and it has never given me a problem. Not this time; this time it was a bit of a fiasco.

I do accept partial responsibility; after all, I was using the B-tree File System "btrfs", a new file system that is clearly marked "Experimental" in the Linux kernel. And I am sure the Ubuntu developers tested the upgrade on Btrfs and it worked fine. But they didn't test it in a low-disk-space situation, and that is where they are responsible for the minor disaster that ensued.

What I Learned

  1. "Nouveau" has a U after the "No", (seriously, I couldn't find it because I was misspelling it in the package search!)
  2. Btrfs file systems live much more comfortably in a single large partition that takes up your whole disk. I had split my disk into a "root" and "home" partition (as always) but formatted both as Btrfs. Don't do that.
  3. Reformatting a partition changes its UUID, and this can confuse Grub and make your system unbootable.
  4. The /tmp directory is no longer a RAM disk "tmpfs", for various reasons which I don't understand. However, it still needs to be world-writable with the sticky bit set. Failing to make it so will cause all kinds of problems for most applications, because /tmp is the directory used by system calls that create temporary files, which means all applications are built on the assumption that /tmp is a world-writable directory with the sticky bit set.

Too Long; Didn't Read

In short, Ubuntu's do-release-upgrade program detects a Btrfs file system and intelligently installs the system updates into a separate subvolume so you can easily roll back if the system update fails. However, since my root partition was too small, the new subvolume containing the updated system filled up the entire partition, which broke the update process.

I fixed the problem by reformatting my root partition to an Ext4 file system (which changed the UUID of that file system), then I restored the previous system from a tar archive backup of the root and boot file systems. But I had to update /boot/grub/grub.cfg and /boot/initrd.img, because these files (restored from the backup) contained references to the old UUID of the root file system. The initrd.img contains a RAM-disk file system which mounts the root and home file systems, and the /etc/fstab file in this initrd.img mounts Root and Home by referring to their UUIDs.

Finally, once the old operating system was bootable and running again, I ran do-release-upgrade once more, and this time it worked as expected -- since the root was now just an ordinary Ext4 file system, it did not create any Btrfs subvolumes; it installed the update by overwriting the previous system, which takes much less disk space and could therefore be done within the limited space of my root partition without incident.

So what happened?

Well, first off, I backed up my existing system:
tar czvf "$HOME/sys-backup.tgz" /boot /etc /lib /bin /sbin /usr /var /lib64 /srv /vmlinuz /vmlinuz.old /initrd.img /initrd.img.old
and then I made sure I had a USB memory stick with the Ubuntu Live install image written onto it with USB Startup-Disk Creator. This is just common sense: if anything goes wrong, you need a backup and a live operating system so you can at least boot your computer to the point where you can copy the backup.

Then I ran the command do-release-upgrade. It downloaded the updated packages and began installation -- and then failed about half-way through with the message "no space left on device." Well, I had this problem with my tiny old laptop: I remember running do-release-upgrade and watching the remaining disk space disappear, waiting with bated breath as the update ran, praying to the disk gods that I wouldn't run out of space and cause the update to fail.

When I got my new computer I doubled the size of my root partition so I would never have to worry about free space while upgrading again. Imagine how enraged I was when I saw the words "no space left on device." I checked the disk space; there was still 15 GB remaining! What was going on?

So I am not defeated yet. I can just reboot into the Live USB system and figure out what is wrong. Once the Live USB system was up and running, I mounted Root and immediately noticed that there weren't any of my files in the root; instead there were what looked like two directories, one called @, which contained my root file system, and another called @apt-snapshot-release-upgrade-raring-2013-04-29. Aha! Btrfs is the culprit here.

Less than five minutes of Googling later, and I see what happened. Btrfs is designed to take up an entire disk with a single large partition. Instead of partitioning, you create "subvolumes", which can very easily be frozen into snapshots and reverted without rebooting, easily backed up and transferred to other media, added to RAID volumes, and all kinds of other handy things.

So what happened? Ubuntu's do-release-upgrade saw I was using Btrfs and very wisely created a separate Btrfs "subvolume" for installing the new system, the idea being that if something went wrong, you could easily revert to your old system. Unfortunately, I learned this the hard way. The problem is, since my root file system was so small, creating a second subvolume just ate up all the remaining space until the upgrade failed. I had 15 gigabytes remaining of a 30-gigabyte partition, and 15 just wasn't enough space for a copy of the previous file system plus the new file system with all the downloaded package files.
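For anyone who ends up in the same spot, this is roughly what inspecting and removing an unwanted snapshot subvolume looks like. This is only a sketch: the device and subvolume names match my setup described above, and it assumes the volume is not already wedged by being completely full.

# from the Live USB system, as root:
mount /dev/sda2 /mnt/rootfs
btrfs subvolume list /mnt/rootfs     # shows @ and the @apt-snapshot-... subvolume
btrfs subvolume delete /mnt/rootfs/@apt-snapshot-release-upgrade-raring-2013-04-29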

Ubuntu developers should have tested this do-release-upgrade scheme on smaller volumes; 20 or 30 gigabytes would be good. Anyway, the reasoning for creating a separate subvolume was intelligent, even if it was prone to fail on smaller file systems -- which is unfortunately how I had very intentionally set up my system. Had I known at the time that Ubuntu would detect a Btrfs file system and behave differently, and that making a separate subvolume was Ubuntu's strategy for easily undoing an install gone bad, I would have simply looked up how to revert the system. Instead what I did was much more foolhardy -- I went into the @apt-snapshot-release-upgrade-raring-2013-04-29 directory and typed rm -Rf *. Imagine my surprise when I saw the rm command fail with the error message "no space left on device." How can it take space to remove something?

Well, it turns out that when a Btrfs volume runs out of space, it treats the situation as a catastrophic failure and simply freezes up. When I say "freezes up", I don't mean it freezes the computer; I mean it just freezes all the data -- it doesn't let you touch anything. (Because Btrfs is a copy-on-write file system, even deleting a file requires allocating new metadata blocks, which is impossible once the volume is completely full.) It might make more sense to set a "read-only" flag or something, so rm returns a "read-only volume" error instead of a "no space left on device" error. But for whatever reason, Btrfs decides to defend your data by returning a "no space left on device" error message for anything you try to change, even removing files. Apart from reading files, you cannot touch it.

So I am like, "screw this," and umount /mnt/rootfs ; mkfs.ext4 /dev/sda2 ; . *Poof* reformatted, no more Btrfs, now things will go back to making sense.

Next I revert the system using the tar backup I created before this little glitch. Very simple: just go to the /mnt/rootfs mount point and run tar xzvf /mnt/home/@home/ramin/sys-backup.tgz (my Home volume was also Btrfs).

In short, I just did what Btrfs was supposed to do for me, revert from a backup, except I didn't have to look up any commands to do it; I did it entirely using commands I already knew... except for one little problem: my root file system now had a different UUID because I had reformatted it, but the UUIDs recorded in the /boot/grub/grub.cfg and /boot/initrd.img files still referred to the old file system's UUID.

So I reboot, thinking everything might go back to normal, and if not, I will just reinstall Grub. Well, it wasn't my day. I reboot and, as I partially expected but was really hoping wouldn't happen, Grub starts complaining that it can't find the disk that has the Linux kernel. It was supposed to be on a disk with UUID=0123abcd-ef45-6789-0abc-def012345678, and there aren't any partitions with that UUID. So I realize my mistake: I should have fixed the Grub config file after I reformatted my root file system. So I go back into the Live USB system, do the chroot thing, run the grub-install command and...

  Embedding is not possible. GRUB can only be installed in this setup by using blocklists.
  However, blocklists are UNRELIABLE and their use is discouraged.
I forgot to use the "--force" command line option to ignore this error. But at this point I am frustrated and have forgotten everything I know about Grub, because I only have to work with it once a year or so, when something really goes wrong and I am in a hurry to try to fix things and it never works right the first time. One year is enough time for me to forget everything I learned, so I have to go back to the manuals and start reading all over again to figure out how to fix things.

Fortunately, I did not use the "--force" option (sorry, Yoda) and instead started freaking out, shouting at my computer. Had I not done that, it might never have occurred to me that it wasn't necessary to reinstall the boot loader at all; I just needed to rebuild the /boot/grub/grub.cfg file, which is done simply by the chroot trick and running the grub-mkconfig -o /boot/grub/grub.cfg command.

So that solved that problem, and the kernel now loads and begins booting, but it fails to boot all the way and kicks me into a recovery shell. Why? The initrd.img contains a file system which contains an /etc/fstab file that has not been updated to reflect the modified UUIDs, which means the initial RAM file system that mounts all the other file systems, including Root and Home, cannot find Root because the UUID is wrong. So I need to rebuild the initrd.img with the update-initramfs -u command.
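For reference, the fix I needed at this point looks roughly like this from the Live USB system (a sketch only; /dev/sda2 is my root partition, so adjust the device name to your own setup):

# from the Live USB system, as root:
mount /dev/sda2 /mnt                   # the reformatted root file system
# (if /boot is a separate partition, mount it at /mnt/boot as well)
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt
grub-mkconfig -o /boot/grub/grub.cfg   # regenerate grub.cfg with the new UUID
update-initramfs -u                    # rebuild initrd.img so its fstab has the new UUID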

But at this point I say to myself "screw it" again and go back into the Live USB system. I reformat the root file system again and do a fresh install of Ubuntu 13.04 from the Live USB system installer. I reboot and it is ready to go, except not. When I log in, the screen goes dark and eventually gnome-session crashes and kicks me back into unity-greeter. So I say "screw it" again, and install Xfce4, which I really love, but not as much as Unity.

So I set about tweaking my Xfce4 desktop environment so it works just right, trying to figure out whether or not it is working with the Nouveau graphics driver, trying to get the audio to work when I play a YouTube video. Then something occurs to me. The reason I am using Ubuntu in the first place is that everything just works, or that is how it is supposed to be. Looking through all the little details of Xfce4, seeing how many things I have to install and tweak, trying to get the audio to work, trying to get the Nvidia drivers to work: this is what I did in grad school. It isn't what a busy professional should be doing, tinkering with the stuff in his computer that should "just work." I need to get a clean Ubuntu installation working with all the defaults and proprietary drivers installed, all in one go, without any tinkering or hassle, as it should be.

And it had worked before! Ubuntu was working, the Nouveau drivers were running smoothly, the audio worked without me giving it a second thought. Why couldn't I get it to work this time? I shouldn't settle for just "whatever I can get working," no matter how much fun it is to tinker with Xfce4. I need to do this properly.

So I back up my system again (in case I should regret my next move) and erase the entire Root and Boot file systems. Then I go back to the sys-backup.tgz archive I made before this fiasco began and unarchive it, putting everything back the way it was. I reboot into a recovery shell, chroot, and install the new /boot/grub/grub.cfg. For good measure, I also run update-initramfs -u to make sure the /etc/fstab in the initial RAM file system also uses the correct UUID for the reformatted root file system.

And boom, the system is back and breathing, but laboriously. When I log in from the beautiful graphical unity-greeter, the screen blinks, going black for just a moment, and then I am back in the unity-greeter. But I can still log in by switching to a TTY terminal with CTRL-ALT-F1.

Now I have my old system back, and apart from gnome-session everything seems to be running OK. I check /var/log/Xorg.0.log and Nouveau is back and running smoothly. Now I have an "ext4" root file system instead of "btrfs", which means do-release-upgrade should work more predictably, without creating any subvolumes that eat up all of the space and cause catastrophic installation failures. So I run do-release-upgrade and... finally something works right!

Now I have a functioning Ubuntu 13.04 "Raring Ringtail" installation. It saw I was using Nouveau and installed the latest Nouveau driver for me, it installed the proprietary codecs (MP3 and AAC), and it set up the Grub configuration file properly. Everything went smoothly this time.

Except for one thing

When I log in from unity-greeter it still blinks (the screen goes black for just a moment) and then immediately goes back to the unity-greeter. Something is wrong with gnome-session. Of course, often the simplest problems take the longest time to figure out.

After a lot of Googling, looking at /var/log/Xorg.0.log (hoping I hadn't misread it and that Nouveau really was still working correctly; it isn't just my imagination how nice the unity-greeter looks), writing a throw-away Bash script that figures out exactly which log files are updated after a failed login, and running diff on the log files from before and after a gnome-session crash, I notice that two things are failing: PulseAudio cannot create a socket /tmp/tmp.cwJeZDKn2z, and the X keyboard device is failing with an error (and I am paraphrasing) "xkbcomp could not compile the key map, possibly due to a mistake in the xkeyboard-config". And where was xkbcomp writing its compiled data file? Of course: /tmp. So there are two unrelated systems, both failing due to a similar problem, in this case writing to a file system. That indicates a permissions problem.

Among all the things I had tried while fixing my system, I had erased the entire root file system, including the /tmp mount point, and /tmp was not included in my sys-backup.tgz archive. While I was in the recovery shell recovering the old system, I had created an ordinary /tmp directory with ordinary user permissions, and this directory was never replaced during the do-release-upgrade process. So when the system rebooted, the /tmp directory was just a plain old directory with restricted permissions, such that it could only be written to by the root user.

So one final command:
sudo chmod 1777 /tmp ; sudo reboot ;
Then I log in, and I am running Ubuntu 13.04 "Raring Ringtail" as if it had always been that way. You win, game over!

What Ubuntu could do better

I wish Ubuntu developers would do just two things to prevent something like this from happening:

  1. If you detect someone using Btrfs, check how much space is available before creating a subvolume for the distribution upgrade. The upgrade will fail if there is not enough space, and worse yet, this will freeze up the Btrfs volume, which for Btrfs beginners could be very difficult to fix. The one-thirds rule is a classic engineering heuristic which you would be wise to follow: if the amount of space used by the present operating system installation on the Btrfs partition is more than one third of the total space, don't create a subvolume for the installation; instead just treat it like you would an Ext file system. (A sketch of such a check follows this list.)
  2. Run a permissions check on all of the most important files: all the directories in the root file system (especially /tmp), all the program files in /etc, /bin, /sbin, /usr/bin, and /usr/sbin, and possibly also the "$HOME/.ssh" directory of every user; then repair the permissions of any files and directories that are not right. This is a simple way to prevent a lot of seemingly complicated problems.
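Here is a sketch of the kind of check I mean in point 1 (the one-third threshold and the df-based estimate are only illustrative):

# Only create a snapshot subvolume for the upgrade if the current installation
# uses less than one third of the root file system; otherwise upgrade in place.
read -r total_kb used_kb <<< "$(df -P -k / | awk 'NR==2 {print $2, $3}')"
if [ $((used_kb * 3)) -lt "$total_kb" ]; then
  echo "plenty of headroom: a snapshot-based upgrade should be safe"
else
  echo "not enough free space: upgrade in place instead of creating a subvolume"
fi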

In fact, I ought to submit that as a bug or feature request to the Ubuntu people directly.

As for Btrfs...

I am still using it for my /home partition. I am always making backups of this file system, and I think with Btrfs this could become much easier. I will write another blog post if I ever figure out how to make this work for me. So Btrfs stays, and I intend to play with it more.

However, there is no real need to use Btrfs for your root or boot file systems; it is easier to just use the old "Ext4" file system format. This especially goes for a small laptop, unless you intend to run several different operating systems and want to drop in one or the other at a whim, or you intend to make a lot of changes to your system and need an easy way to undo mistakes without backing up everything by hand using tar. If you must use Btrfs for root, just make sure you allow Btrfs to take up the entire disk as one large partition, rather than one partition among several.

Thursday, January 10, 2013

Experiment on Linux: A Web Server in Just One Line of Code

So, you want to learn about web applications? You have just installed Linux, and you figured out how to open a terminal and install web server software like Apache. But haven't you ever wondered exactly what it is the Apache server is doing? What magical change to your Linux system has it effected so that your computer suddenly acts like a web server? Well, Apache is a system that launches several programs that we call "daemons", which do nothing more than sit and listen for TCP connections on the HTTP port, number 80. The operating system takes care of figuring out whether the IP address and TCP port number match; if they do, it wakes your Apache daemons and hands the information that was stored in the TCP packets directly to the daemon. So anyone who places a TCP packet with the correct IP address and port number onto your network, where it can be detected by your computer's Internet Protocol interface, will trigger these web server daemons into action.

Here's a trick you can try as soon as you install Linux, even before you install a web server.

Open a terminal and enter this command:

nc -l 50080

Nothing will happen right away; just let it sit there. What this command means is "open a TCP socket and wait for something to try to connect to it on channel 50080." HTTP servers normally listen on channel 80, but your Linux system guards that channel, so you need to use channel 50080 instead. Then open a web browser and type "http://127.0.0.1:50080". Using 127.0.0.1 will cause your browser to communicate with your own computer over its own TCP socket, instead of connecting out to the Internet, and of course ":50080" means to talk on channel 50080 instead of the default channel 80.

Then, in the terminal window where you typed the nc -l 50080 command, you will see exactly the information that your browser sends to any website it tries to communicate with, written out in the HTTP language. The browser will not show anything; it will act like it is waiting for a web page to load, and it will probably just get tired of waiting and eventually show you an error message. At any time you can go back to the nc -l 50080 terminal and cancel it by pressing "Control-C" (C for cancel).

So here is what my Firefox says to every web server it ever meets:

GET / HTTP/1.1
Host: 127.0.0.1:50080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20100101 Firefox/17.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive

If you were a genius hacker, then instead of cancelling the nc -l 50080 terminal, you could type commands in the HTTP language directly into the nc -l 50080 terminal, followed by the web page document written in the HTML language. This will be fed back to your browser and cause it to display the HTML document, assuming you made no mistakes and typed everything before the browser gets tired of waiting (typically you have about 2 minutes).
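In fact, the title of this post can be taken literally. Here is a sketch of that one-line web server (the page content is just an example): printf writes out a minimal HTTP response followed by a scrap of HTML, and nc hands it to whichever browser connects to channel 50080 first.

printf 'HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nContent-Length: 26\r\nConnection: close\r\n\r\n<h1>Hello from netcat</h1>' | nc -l 50080

Point your browser at http://127.0.0.1:50080 again and it should render that little page; press Control-C in the terminal afterwards to stop nc.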

The point of this is to demonstrate that the basic technology of the Web is all very straightforward: TCP sockets are like two-way radios, and everyone agrees that channel 80 is the HTTP language channel (like how, in your local community, channel 52 might be the Spanish-speaking TV channel). HTTP is human-readable code. Whenever you connect to a website, your browser is speaking this language over a TCP socket, and the server on the other end of the Internet is talking right back. It all happens in an instant, but you can make it all happen by hand if you want to.

We often work with grand server software like Apache, but the job of the server software is really just to provide you with an elegant tool for dumping HTTP code onto a TCP socket in a way that intelligently responds to the web browsers communicating with it. The function of the TCP socket is simply to feed that code onto the Internet for you. So a server can be anything that lets you read and write code on a TCP socket, even just one single command, like netcat.

Linux gives you the tools to invent your own space in the Web, right down to the smallest detail, but still provides a simple way to freely install enterprise-quality server software that handles all the details for you.

One final note: don't ever say "TCP channel 80"; say "TCP port 80", or port 50080. We don't call ports channels; even though ports are almost exactly like TV or radio channels, they are called "ports." Also, don't refer to HTTP as a "language" (even though that's what it is, a language); you should call it a "protocol." HTTP means Hyper Text Transfer Protocol. HTML is a language, the Hyper Text Markup Language. What's the difference between a language and a protocol? Not much; that's just how the jargon has evolved.