Linux Chrome now with Flash & extensions (+adblock)

12 07 2009

The Linux version of Chrome has been coming along fairly quickly: the latest development build of Chrome (Chromium) for Linux now works with Flash and has extension support. It is also possible to configure the options (although there are still some TODO stubs, so setting a proxy isn't possible there. EDIT: Try the --proxy-server argument). Tested under Ubuntu 9.04 64bit.
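For example, a minimal sketch (the proxy host and port here are placeholders, not anything from my setup):
chromium-browser --proxy-server="http://proxy.example.com:3128"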

★ ☄ Sleepy kitten  ☆ ☽

★ ☄ Sweepy kitten. ☆ ☽

Update (05 Mar 2010): Google now have a proper version of Chrome with Flash, themes, Greasemonkey and extensions, including .deb packages for Ubuntu (and packages for Debian, Fedora and openSUSE). Simply grab them from the Chrome site, no other setup needed. They will also install repositories to keep things up to date. They are 'beta' but they're more likely to be stable than grabbing the bleeding edge ones from the chromium-team repo (there are 'unstable' packages too). I was running into issues with Chromium freezing up (mainly Flash related) which are not an issue with the official Google Chrome build.

There is also a fairly good Adblock extension. It includes the same filter lists as the Firefox one. If you need to block something extra, hit ctrl+shift+k and you get a handy wizard where you can just click on whatever you want to nuke.

I also recommend giving the HTML5 version of YouTube a try. It seems faster than the Flash one and things like seeking are quick, although full screen has a few issues. In order to activate it you need to first pop out the video using the icon at the top right of the video; it's still much faster than Flash since it doesn't need to rebuffer the video like Flash often does. I did have some sluggishness of the controls in full screen but the video playback works fine. Also, for some reason it goes back to the Flash player when I am logged into YouTube with a user account, but works fine without a login.
/update

Old instructions:
To install under Ubuntu:
sudo su
echo "deb http://ppa.launchpad.net/chromium-daily/ppa/ubuntu jaunty main #chromium-browser" > /etc/apt/sources.list.d/chromium.list
sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xfbef0d696de1c72ba5a835fe5a9bf3bb4e5e17b5
sudo apt-get update && sudo apt-get install chromium-browser

To enable Flash support:
cd /usr/lib/chromium-browser/plugins
sudo ln -s ../../flashplugin-installer/libflashplayer.so

For extensions:
Start browser with the following:
chromium-browser --enable-plugins --enable-greasemonkey --enable-user-scripts --enable-extensions
Click on a .crx link (such as AdSweep) and browse to chrome://extensions/ to check installation.





Linux/MacOSX/Windows/Vista desktop usage percentages

23 08 2008

I was looking at some of the data from the w3counter and thought I would graph it out.

OS desktop usage % (Grouped)
Here we can see that the usage changes very slowly; Windows has a slight overall decline and both Linux and MacOS have increased slightly.

Linux vs Mac vs Vista %
This shows an increase in both OSX and Linux usage up until Vista overtakes them, then they both level off. It's interesting that the two coincide, possibly due to large scale acceptance of Vista. Mac usage seems to have fallen off slightly more than Linux usage, although both are still higher than they were, just not gaining as much ground as they were.





Open Video Codecs and Flash

8 05 2008

General
Theora
Xvid
OMS Video
Dirac / Schrödinger
Flash / flv / f4v


General


When a standard is open it allows for huge adoption: anyone can use it and be sure that their data isn't locked away and that they won't have to deal with a specific company if they want to access their own content. Open standards are what runs the Internet. The problem is that being an 'Open Standard' isn't all that's required. H.264 for instance is an open standard, but it's not royalty-free as there are patents on it, and it requires a licensing fee for implementation. While these licenses are cheap and easy to obtain for companies, making them attractive, they block the formats for the non-commercial open source community. You are still allowing a 3rd party to dictate the requirements for access to your data.

This is where the much hated software patents come into it: you cannot distribute patented software in binary, precompiled form, as a patent has to be applied to a physical object (thanks to a court case in America, binary code somehow now counts as such; while other countries have various laws, America is where Silicon Valley is so we all lose out, and some countries seem to be specifically making exceptions to allow patents to be applied on computers). You can distribute patented software in source code since it's not an actual implementation of it. As for whether you can legally compile that code for personal use, that varies from country to country; there is some discussion on that here.

Firstly, it's important to have a royalty-free, unencumbered codec for use in streaming video so that things such as Firefox and Linux/Unix distributions are able to legally play back these formats. Patents are the reason that in order to support MP3 playback you have to install codecs (which in newer distributions is a lot easier and automatically set up). Commercial distros can afford to pay the patent license fees but this isn't much help for the open source community or hobbyists. Ubuntu/Debian/Fedora/Gentoo/Arch/BSDs etc… aren't commercial distros; they don't charge you so they can't pay for the codecs, and even if they could pay for them, the media would still be in a format that is locked away, accessible on the whims of the patent holder.

Since the HTML 5 draft (due to be finalized 4 years from now in 2012) includes video streaming, having a decent open codec is more important now than ever before. Originally the draft had mentioned the use of Ogg, however Nokia and Apple raised objections, concerned about hidden 'submarine patents', low compression ratio and lack of hardware decoders. Nokia wanted support for H.264 (which also happens to be the codec Apple is already using for iTunes/iPod video along with AAC for audio) or alternatively leaving out streaming video and letting corporations fight it out, H.264 being impossible to include in the standard.

As for the royalty-free video codecs that are around, we have Theora, Dirac and OMS.


Ogg Theora


Firstly there is the oldest and most widely known codec, Theora, often referred to as "Ogg Theora" as it's contained in the Ogg container format. It is not to be confused with Ogg Vorbis, which is an audio codec designed to be a royalty-free alternative to MP3; it also lives in the Ogg container format and is often used to provide the audio for Theora videos in Ogg format.

Theora is a project of the Xiph.Org foundation (also responsible for the royalty-free codecs FLAC, for lossless audio, and Speex, a voice audio codec with an extremely good compression ratio). It's based on VP3, which was donated to the public by its creator On2, who dropped all claims on it.

Unfortunately it seems that Theora is now out of date and has fairly bad compression when compared to other codecs. Xiph.Org are apparently working on an improved version of Theora for HTML 5, but with the binary format locked for compatibility it's unclear to me whether it can be improved enough to reduce file sizes and improve quality, or if it's just work on improving the tools around Theora.


Xvid


Xvid is apparently a royalty-free codec; it was forked from the OpenDivX code when DivX 4 went closed source. The problem is that Xvid is based on the MPEG-4 standard, which has 2 dozen companies claiming patents on it, and licenses are apparently no longer being offered.


OMS Video


Sun's Open Media Commons recently announced OMS Video, an open codec. The video component is based on H.261, which is out of the 17 year patent restriction, with newer unpatented technologies added on top. Currently there isn't anything from them yet code wise. Another worry is another Open Media Commons project, DReaM, a DRM specification. As far as DRM goes it seems less evil since it's designed to be open and royalty-free itself, but it's still DRM. In the end, as long as the DRM isn't built into OMS it shouldn't be a problem, but I have a small concern that they will use OMS as an infection vector for DReaM. The announcement and specification overview don't mention DReaM at all other than saying it's also part of Open Media Commons, so it's probably not an issue but worth watching. Fortunately DRM is its own worst enemy. DReaM is supposed to bring an open, royalty-free DRM system to allow music to interoperate, but DRM seems less about protecting music and more about online music retailers locking clients to their system/devices: once someone has a whole database of DRM'd songs they will have to buy hardware that supports it forever and keep shopping at the same place. They can never leave (at least not without breaking the DRM or losing all their music). You can read more about why DRM sucks at the Defective By Design website.


Dirac / Schrödinger


The BBC, who have been experimenting with streaming video, created Dirac (wikipedia), which is designed to be completely unencumbered by using patent-free technologies. Wikipedia says it is in the same range of compression as H.264. There is an implementation of Dirac called Schrödinger which has libraries and gstreamer plug-ins and is intended to get it into the Ogg container.


Flash / flv / f4v


Recently Adobe, with their Open Screen Project, opened Flash and the flv/f4v format for use without license restrictions; the swf specification and the flv specifications are already published. This is great news for projects like Gnash, however my main concern is that flv has patented technologies in it. For instance flv in Flash 9 supports AAC for audio, and the Wikipedia article on AAC says:
"…a patent license is required for all manufacturers or developers of AAC codecs, that require encoding or decoding. It is for this reason FOSS implementations such as FAAC and FAAD are distributed in source form only, in order to avoid patent infringement.". This makes it seem like even though the license restriction is removed, the open source community will benefit from having the APIs available but not be able to actually make a binary version of the Flash client. You won't be able to expect Flash to be built into Firefox or shipped with Ubuntu. The real clients of Adobe will still likely need a license from Adobe, unless they want to go to the patent holders such as AAC's and independently obtain licenses (likely to end up costing more in the end). Another format used is MP3, which has a whole load of patent issues; the MP3 decoding patents run out around 2012 and the encoding ones later, around 2016 (I've seen various different times but they're fairly close; there is a big list of mp3 patents but it doesn't say what is needed for decoding/encoding and what's optional, the latest is 2017). flv also uses yet another commercial proprietary codec, Nellymoser.

Those are just the audio codecs. For the video there is H.263 (since Flash 6) and, as of Flash 8, VP6. I haven't found much information on the license issues around them but they do seem to be patented. Wikipedia says "As of September 2006, an open-source implementation of the decoder is part of the libavcodec project, though producing or dealing with VP6 video streams inside libavcodec/libavformat seems to be discouraged and/or refused due to clashes between the ffmpeg's developers and On2 technologies by a claim of Intellectual Property and Trade Secrets Infringement made by the corporation itself."

As for Flash itself, I have no idea what other patents on the technology exist when we live in a world where anti-aliasing fonts is patented. In order for Flash to really be open source friendly we would need to see it adopt patent-free codecs for flv (such as Dirac, Vorbis, Theora or OMS).





Things are looking up for Linux game support

22 10 2007

While Linux probably isn't quite ready to be an operating system choice for gamers, Linux users who happen to want to game are in for a treat.

Recently released was a native client for Enemy Territory: Quake Wars, which I have been having fun playing the last couple of days. Many people have been claiming it is a BF2 rip off (mostly BF2 players), however the gameplay itself is completely different even if there are quite a few similarities (plus BF wasn't the first game to implement its class system or vehicles, just one of the more memorable; it's also something that UT2003 already did). It's a much faster paced game, so there is very little waiting in a corner for someone to come and capture a flag, or running across the map for 5 minutes until you get to one. A lot of the team play has been stripped down, but this just makes it play more like a standard FPS, which isn't bad, just different. There is a list of notable differences for BF players here.

And out next month is Unreal Tournament 3, which is getting a native client; there's a Windows beta demo out and a Linux one on its way. When ETQW is mentioned people generally cry that UT3 is better. Personally I'm going to buy both, although it's hard to tell from prerelease hype and a beta demo exactly how good a game is going to be. They both seem like great games, and since UT3 has both FPS and BF style gameplay it should be flexible enough to keep interest.

Source games such as Team Fortress 2 are working great under WINE, with the same performance as under Windows (you might lose 5% but make up for it with lower lag; the advanced shaders can apparently be enabled with a setting if you want), with the whole Orange Box going for $50USD (about $56 AUD thanks to America ruining its economy). The latest version of Wine, 0.9.47, runs Steam great, although I did run into a problem with purchasing Orange Box through PayPal, since it opened PayPal in Firefox but then Firefox wouldn't execute the steam://paypal/return command. I was worried for a while that it was going to charge me without adding the game, but PayPal showed no payment. I copped out and booted to a Windows partition and bought it through there, but it's probably possible to manually pass the command with something like "wine ~/.wine/drive_c/Program Files/Steam/Steam.exe paypal/return" or set the protocol association in Firefox to run the command; I haven't looked at it too much. Now I'm awaiting my TF2 and HL2E2 download. I already beat Portal, which was a fun game although a bit too short, and I'm hoping there is a sequel in the not too distant future. Valve recently posted that job for a Linux games programmer and have already ported Source to use OpenGL for the PS3, so we could see a native Linux client in the future.

EDIT: I just tried HL2:E2; it seems to have some graphical problems with the shaders turning everything bright colours. Running without them causes crashes, however you can run the game in DirectX 8 mode and lose some graphical detail. This is probably something that will be fixed fairly soon since it seems like a simple bug; they already fixed some similar problems with Portal.
EDIT2: Use wine 0.9.46, not 0.9.47; this works without the -dxlevel 80 flag. I had the same problem with TF2 that I did on HL2E2, and it works great with 0.9.46.

Wine seems to have most of DirectX emulated; the main problem is a few minor bugs that crop up in games, such as the mouse cursor being stuck or leaving the window etc… Most of the bugs that are left are minor but make games unplayable, and are often specific to only the one game. Unfortunately there are enough of these that most games don't run, but it's certainly getting there; presumably a lot of these are in the target for Wine 1.0.

Wine is improving quite fast, probably faster than new specifications are being produced. With many games ensuring that DX9 is supported due to the slow adoption of DX10, and with the OpenGL 3.0 specification approaching release, implementing the DirectX>OpenGL wrapper might get a whole lot quicker since OpenGL 3.0 seems to support many of the same features; we could see WINE running more games off the shelf than ones that don't within a few years.

Virtualization could also be another great way to run games under Linux with 100% compatibility, although it requires a copy of Windows. All that would be needed is a way to allow direct access to the video card; this can actually be done under Xen but requires a 2nd video card, since the first one will be locked by the BIOS at boot. Alternatively a DirectX>OpenGL wrapper in the Windows install could work (I hear this is how Parallels works, using the WINE one), but it might sacrifice some compatibility and speed. OpenGL can already run from a virtualized environment with VMGL, so with this and WINE's DirectX it might even be possible already. Maybe some official support from nVidia/ATI would expedite things.

There's some interesting history about WINE's DirectX implementation and information about a DirectX 10 implementation being underway here.





When solutions to problems just confuse and annoy

1 10 2007

I just spent 2 days troubleshooting my network connection (well, actually I spent a majority of it playing and ignoring the problem, but anyway). (Linux client beta testing for X³ is soon)

My system had been working fine for years without issues, then I swapped the harddrive for a slightly bigger one. I basically reinstalled Windows, then booted to the Ubuntu LiveCD, made the ext3 partition, rsynced my Linux install across and fixed the fstab/grub entries. Then my internet under Linux just stopped working, but of course not right away, because that would be explainable; it worked for a couple of boots, enough time for me to download the almost 1Gb worth of crap I need to make my Windows install actually usable (250mb SP2, 50mb nVidia drivers, 23Mb SoundMax audio drivers – which were a pain to find because they show up as an Intel ICH8 82801H in Linux and you need the drivers that are specific to your manufacturer even though it's the same chipset on everything – Firefox, Antivirus, VideoLan, putty, DaemonTools, Pidgin, 7-Zip, etc…), and people think Linux is hard to install? :/

After all that had downloaded, I rebooted to Windows and installed it. Then, going back to Linux: no internet connection.

Network worked fine under Windows.
At first I thought it was DHCP/DNS problems, but setting the IP address manually didn't help; it couldn't reach the ADSL modem or the server. I considered the possibility that the kernel had been updated and just broken horribly, or some other related package, but I booted off the LiveCD, which I know works fine, set the IP manually, and it still wouldn't work. Considered the possibility that I had somehow screwed my connection to work for Windows only, but my other computer ran fine.

Spent ages swapping cables in case the wire was broken and Linux was somehow more picky about faulty cables, and rebooting everything I could. Disabled all the DHCP/DNS servers/clients in case some weird network voodoo was happening.

Then, strangely, I noticed that the light on the router wasn't lit while under Linux, but only under Linux; rebooting to Windows, it would magically light up.

ethtool reported it as link not found, so it didn’t appear that there was some magic command to make the interface spring back into life (I had already tried ifconfig up etc…).
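For reference, checking the link status looks something like this (eth0 is an assumption; substitute your interface):
sudo ethtool eth0
sudo ifconfig eth0 up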

I vaguely remembered having a similar problem once before, years ago when I got the mobo.
Pulled the power cord out from the back of the computer (I had already turned it off and rebooted many times).
Plugged it back in.

Like magic everything is working again :/

At best guess, when I installed the Windows NIC driver (after downloading it from the working Linux connection) it screwed up the interface somehow in a way that is only a problem in Linux, and only completely losing power manages to fully reset it (it had been stuck like that for about 4 days, with my turning it off overnight). It would explain why I remember something similar happening when I first got it, since I would have needed to install the NIC drivers then too. For reference the mobo is an Asus P5B with a "03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)". [EDIT: Looks like the problem is the wake on lan feature, but it doesn't happen every time I shutdown Windows like the article says it does; there does seem to be a solution though, http://gentoo-wiki.com/HARDWARE_RTL8168]

It also reminded me of several years before when I had a failing motherboard. Firstly the CMOS kept losing its settings even though I replaced the battery, so it required me to push F1 at boot to continue. But the USB keyboard wouldn't work for some odd reason, so I tried a PS2 one. It worked, but only if I pressed it very fast… until the next reboot. Then the USB one worked, once again only if I pushed it very fast… then the next reboot it was back to the PS2 and so on. So every boot I had the little ritual of furiously hitting the F1 keys simultaneously on two separate keyboards hoping that it would be detected (about ⅓ of the time it would need to be rebooted to try again). Once it got past POST everything, including both keyboards, worked without problems.





OpenWRT on D-Link DSL-G604t

8 07 2007

The default firmware for the router leaves a lot to be desired. DNS is fairly painful, with bugs such as IPv6 enabled systems returning 0.0.0.1 for IPv4 addresses (it's fixed in the newest Australian firmware release but I'm not sure about other countries). There was also a problem with the system dying under heavy load such as BitTorrent or other P2P (also fixed), but there doesn't seem to be any way to link hostnames, static IPs and MAC addresses together. There are some options such as reserved hosts, which map MAC to IP, and another section that maps hostname to IP, but these seem to override each other and the IP only seems to be for DHCP, not DNS. You could manually edit /etc/hosts on the modem, but it will be wiped each reboot.

There is good information on the router at Seattle Wireless, including hardware info and the ADAM2 bootloader.

Firstly you need to find the IP address of the router's ADAM2 bootloader.

If you have a working router, you can telnet into it and cat /proc/ticfg/env

It's a good idea to save this information for later, as things like the mtd layouts are handy to have. You want to find the IP address entry. There are a few defaults; 10.8.8.8 is common for the ones in Australia. Mine is 169.254.87.1, which I believe is a failsafe address that probably started being used when I accidentally wrote over the environment variables part of the mtd. Fortunately it's fairly bullet proof, providing you don't kill mtd2 (0x0 to 0x90010000), which is where the ADAM2 boot loader lives.

You need to grab the source code for OpenWrt from svn as there currently aren’t any ar7 releases.

There is a guide on the OpenWRT wiki; it's important to read, as I will be glossing over the details here.

Compiling is fairly painless: 'make menuconfig', choose options as shown in the guide, and then 'make'. This will make images in the bin directory. The only one that's important is the openwrt-ar7-2.4-squashfs.img file.

The guide recommends you checksum the image using TI's GPL code. I'm not sure if this is actually required or not, but I did it to be safe.

There is an adam2flash.pl file in scripts, however it doesn't seem to be able to find the device. I tried it a long time ago with the 2.4 OpenWRT and had a similar problem; I manually modified the script to fix the error (I forget how), but then the script refused to flash as the mtds were different on the new Australian firmware.

Instead you need to manually upload the firmware with the ADAM2 ftp.

Firstly set your computer's IP address to the same subnet as the ADAM2's IP. Then ftp to the IP address and quickly unplug and replug the modem (you're supposed to wait 10 seconds, but under Windows the interface will go down if the router is off for too long and ftp won't work).
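Something along these lines (the addresses are examples using the common Australian default mentioned above; use your ADAM2 IP and a free address on its subnet, and eth0 is an assumption):
sudo ifconfig eth0 10.8.8.2 netmask 255.255.255.0
ftp 10.8.8.8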

You should get the ADAM2 login screen; login with 'adam2' as both the user name and password.

You can see the mtd information with:
quote "GETENV mtd0"

It's a good idea to check them all. You will need to modify them for the firmware, but don't touch mtd2 as that's where the boot loader is and screwing it up will brick the router unless you happen to have a JTAG. It's also a good idea to work out the partition ordering: it's [mtd2]-[mtd1]-[mtd0]-[mtd3], with [mtd4] spanning across the memory of [mtd1] and [mtd0] combined. mtd2=bootloader, mtd1=kernel, mtd0=filesystem and mtd3=adam2-environment-variables (nuking this will make ADAM2 go to the failsafe defaults, which will mean you might need to find the failsafe IP address).
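Roughly, the layout described above looks like this:
[ mtd2: bootloader ][ mtd1: kernel ][ mtd0: filesystem ][ mtd3: env vars ]
                    \______________ mtd4 _______________/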

You can then use:
quote "SETENV mtd4, 0x90010000,0x903f0000"

You need to open the openwrt-ar7-2.4-squashfs.img file in a hex editor and find the memory location that contains the code 'hsqs', then add this to the location where the ADAM2 bootloader ends (0x90010000). This location is where the compressed filesystem is kept. From above you can see that the compressed fs is at mtd0, so the resulting number is where mtd1 ends and mtd0 starts. The rest of the coordinates are unchanged from the defaults.
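A sketch of finding that offset without a GUI hex editor (assuming the magic lands on one output line):
hexdump -C openwrt-ar7-2.4-squashfs.img | grep hsqs
The hex offset in the left-hand column plus 0x90010000 gives the mtd1/mtd0 boundary.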

You can then upload the image
binary
hash
debug
put openwrt.img "openwrt.img mtd4"
quote REBOOT

If ftp dies when attempting the put command, simply reconnect. You should see a bunch of # on the screen thanks to the hash command, so you can see everything is working well.

When you reboot, the modem should come up with the IP address 192.168.1.1; it might take quite a long time to work though. I suggest you set your IP to an address on that subnet and leave ping running. When it responds to a ping it will still take quite a while for the servers to come up; you can keep trying to telnet to 192.168.1.1.

Since this is the first boot, it needs to generate an ssh key. You can see if it's still generating the key with 'top'; it should be a dropbearkey process using a lot of cpu, and a firstboot process should be around too. You also should set your password with 'passwd' to enable ssh and stop anyone logging in.

When you have done that and the key has been generated you can ‘reboot’.

Once again it will take several minutes to come back up.

Next edit /etc/ethers with vim and add your static MAC entries in the form of:
00:AA:AA:AA:AA:AA 192.168.1.2

And edit /etc/hosts for the hostnames of the ip addresses.

Then you need to rm /etc/resolv.conf, as it is a link to an automatically generated file in /tmp, and add your ISP's nameservers to it. You also need to add them to /etc/resolv.conf.auto so dnsmasq checks them for clients on the LAN.
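For example (the nameserver address is a placeholder for your ISP's):
rm /etc/resolv.conf
echo "nameserver 203.0.113.1" > /etc/resolv.conf
echo "nameserver 203.0.113.1" >> /etc/resolv.conf.auto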

The TI ACX111 wireless interface needs firmware to be wget'd to /lib/firmware; check 'dmesg | grep acx' to find out which files are actually needed. Mine wanted tiacx111 and tiacx111c16.

You can either reboot or ‘rmmod acx; insmod acx’ to bring up the interface

I have yet to figure out how to get the wireless lan working as an access point; for some reason the /etc/config/wireless only works for 2 chipsets. iwconfig will work to bring up the wireless interface manually and it shows up on a scan, but I was unable to authenticate with the access point. I'm not sure if it's because there is more setup required or if the drivers are broken, although it's probably a config problem. For now I have gone back to the official firmware but turned off the DNS/DHCP servers and set up a proper one on another system, until the OpenWRT branch is updated to support acx111 in the wireless configuration.





code, code.back, code.back2… – A better way with Revision Control (svn/git/bzr/hg tutorials & comparisons)

18 06 2007

This article is very long. It covers some basics of what revision control systems (RCS) / source code management systems (SCMS) are, a basic tutorial of using Subversion for a personal repository, what distributed ones are, the basics of using git, bzr and hg for a personal repository, and my comparisons of them. It's only a basic introduction; I've never had to manage any large complex projects so advanced stuff isn't covered (plus it's long enough).

If you program and don't use some kind of RCS you are making your life much harder than it should be. RCSs are great, and distributed ones are even better. All you need is to learn a few steps to set up a repo, and somewhere to put it; anything with ssh can be used, or just the local disk.

Even for non-programmers, if you find yourself making changes to config files much, then having a repository containing them is definitely a good idea; if you botch it up, you can always revert to the previous edit and compare the two with diff.

    Introduction to RCS


Originally when I would code, I would intermittently 'cp -rf directory directory.backup'; that way if I screwed up my code I could always go back. This was working fine for my smaller projects, at least until one particularly painful Uni assignment (SunRPC will segfault on anything). Eventually I had reached backup.22 and often I had to go back a few revisions, not an easy task because I wouldn't remember the exact number, and I had often done more than one change to the code, like adding comments to everything, which resulted in me creating more backups with things like a single comment added, because the code I had done had started to randomly segfault. I'm sure there was a simple memory leak, but with the deadline a few hours away I didn't have time to hunt it down (basic gdb wasn't working because it was the SunRPC libs that were crashing). In the end I got my assignment in (although it was probably the worst mark on an assignment yet; once again, I hate RPC).

After that I decided to try using a revision control system. Previously I had never actually thought about using them for my simple coding and just assumed they were only needed for larger projects; the only time I had encountered them was to occasionally grab some code when I needed something newer than was shipping with my Linux distribution. However, while googling for stuff about uni I managed to find this website from another student about setting up SVN for projects. I had only previously used svn for grabbing code from public projects; I had also used cvs, although it was fairly clear that cvs was a fairly outdated system.

    SVN – Subversion tutorial


SVN works rather well for me as it is on the systems at uni and can be tunneled over ssh, so I can push/pull to/from my server at home. The basic functionality of svn allows for going back to any previous revision with '-r #', coding from any system that can connect to the repository (all I need to do is checkout/update it), and seeing a 'diff' between revisions to see what I changed.

Unfortunately Subversion isn't distributed (explained later), so I wouldn't recommend it, but understanding the basics of revision control is important, so I have instructions on using it here; the same basic outline of commands is used for most of the revision control systems around, with a few minor differences. I might use svn for a basic repository for editing config files, but any of the distributed ones would work just as well.

You can use any system you either have direct access to or ssh (and http etc…) to store your code, I’m using ssh in this.

SourceForge (from the owners of Slashdot) provides free public svn (and cvs) hosting for open source projects, including bug tracking and basic forums, however I haven't found the site very nice to navigate, although you can just host a normal website on it.

Setting up a personal local repository is easy.

Firstly we need to make a svn folder where all the other svn projects will live:
mkdir ~/svn

Then we need to make a repository for the project:
svnadmin create ~/svn/PROJECTNAME

Next is importing the current code. You do this from the directory where your code lives, not the svn created one; make sure you clean up any unneeded files like binaries and generated output first:
svn import . \
file:///home/USERNAME/svn/PROJECTNAME/trunk -m "Initial commit."

Notice that it is going into the sub folder trunk; this is important because later on you might need to tag code, so you might end up with /trunk/, /1.0rc1/ and /1.0/. You can just put the code in the main directory if you don't want this kind of functionality. Make sure there are 3 /'s in the URI; normally the server name goes after the first /, but since this is local there isn't one. You must also specify the full path to your folder. -m is the commit message that describes the changes for revisions.

You can also use svn+ssh://USERNAME@SERVER/home/USERNAME/svn/PROJECTNAME/trunk if you want to do it over ssh.

The next step is to checkout your repository. Even though you have a local copy you still need the subversion metadata (annoying url prefix, I wish it was just ssh://):
svn checkout \
svn+ssh://USERNAME@SERVER/home/USERNAME/svn/PROJECTNAME/trunk \
PROJECTNAME

This time I'm doing it over ssh; once again remember that it's coming from the trunk folder. The trailing PROJECTNAME tells svn what to name the checked-out directory. co is a shorter alias of checkout if you're excessively lazy.

That's the hard bits done; from now on it's very simple, as all the information about where to upload is stored in the .svn folder in your project.
Now you just edit your code, and once you're happy with the changes you type:
svn commit -m "Description of changes."

When you create a new file that you want to add to the repository you must first tell svn about it manually; this avoids accidentally uploading compiled binary files or files outputted by your program:
svn add filename

To update to the version of the code in the repository (or a particular version with -r#):
svn update

To see the difference between revisions, you can also specify a particular revision with -r:
svn diff

To see the logs:
svn log

To make a tag:
svn copy \
svn+ssh://USERNAME@SERVER/home/USERNAME/svn/PROJECTNAME/trunk \
svn+ssh://USERNAME@SERVER/home/USERNAME/svn/PROJECTNAME/1.0 \
-m "Tagging 1.0."

    Distributed Revision Control Systems


SVN was a massive improvement to managing even simple personal code, and I used it for several months without issues, however there is a new breed of RCS appearing: distributed ones. There are currently 3 main contenders:

  • Git – Made by Linus for maintaining the Linux kernel, also used by KDE.
  • Mercurial – Run with the command 'hg'. A popular Python based one, used by Sun (for hosting of Java).
  • Bazaar-NG – Or 'bzr'. Python again, as used on launchpad.net and by the Ubuntu community (they're all Canonical made).
  • There is also darcs (written in Haskell), GNU Arch, and monotone. There is a Wikipedia article listing various revision control software (commerical/free and central/distributed).

    Being distributed means that when you check out a central repository, you actually have your own local repository rather than just a copy of the code from it, so you can commit changes without having access to the central repository. This allows for much easier experimentation as you can quickly branch off from your local repository, and it's useful for people with laptops who might not have an internet connection. With subversion, you can checkout a repository but then you're stuck with the one version; you can only commit back to the main repository, and the most you could do is try to copy the directory and other painful workarounds. Also there isn't technically a 'central' repository, although there will generally be an official one everyone downloads from. These are still handy features to have even when it's just for personal use; for instance a simple 'svn log' needs to talk to the central server, which can take some time if it's a large repo and/or is over a slow connection.

    Speed wise Git is currently the fastest for most operations, as it was designed for maintaining the massive Linux kernel. Next fastest is Mercurial and then Bazaar (which is planning to match git's speed in its 1.0 release). However for most simple projects speed isn't that much of a requirement; as long as it's not tediously slow for simple changes any of them should work fine.

    The functionality of all of these is fairly similar: you tell it who you are, you init the original source directory, commit the initial repository, then you can checkout from anywhere with access, branch off code, modify code, commit it, merge it back into the master branch, push it to the server, review logs, see changes with diffs etc…

    Most of these support checking out code from repositories of a different type, though you might need a plugin. You can also convert between systems with tailor, although you might lose some information.

    In the end it's probably just personal choice which one you prefer, as they all offer the same basic functionality.

      DRCS Tutorials


      Git


    Firstly there is a great talk from Linus about Git on Google Video; it's 1hr 10min long. It might be somewhat dated however; some of the functionality talked about might have been implemented or sped up since then (for instance pushing in git now exists).

    Git is written in C and is currently the fastest. It is probably best suited for larger projects. However, some of Git's more advanced features are a bit harder to use and understand, although not by too much for basic usage, so it might not be suited for the less experienced user. The speed improvements of Git are apparently lost on Windows systems as they rely on some specific methods of disk access (unless this has been fixed in newer versions), so Windows or multisystem developers might want to avoid it.

    If you want, you can get free public Git hosting here, although it's only a very basic service currently.
    UPDATE: There is also github which has a free opensource developer plan (100mb, no private repos).

    A nice thing about Git is that it keeps all your branches in the same folder; with bzr/hg, when you branch off it creates a separate folder for that branch. You could keep them all in one main project folder (for Bazaar you can create a repository that stores all your branches, saving space by sharing common files), but with Git everything is in the one folder by default, making for a much tidier feel: branches you aren't working on are tucked away and you switch between them fairly painlessly with the checkout command. It might require a bit more effort to work between 2 branches however.

    Git also has nice sha-1 ids for everything so you can tell if things become corrupt, and it generally views all your code as one thing rather than each file, so it can track changes to a function even if it's moved from one file to another.

    You can 'apt-get install git-core' on Ubuntu/Debian, however it's out of date so the instructions will vary. You can get the code from the site and compile from source for a newer version.

    Firstly tell Git who you are (and enable pretty colours); the following is for newer versions of Git:
    git config --global user.name "YOURNAME"
    git config --global user.email EMAIL@DOMAIN.com
    git config --global color.diff auto
    git config --global color.status auto
    git config --global color.branch auto

    Note that those flags are all double-dash (--global), not a single dash; WordPress screws it up.

    To initialize the current code directory (older versions use ‘git-init-db’):
    git init

    When committing to Git, you need to maintain an index of files that are to be committed; you can use the 'add' command to do this. In svn you only need to add new files, but in git you need to also add changed files. However, rather than adding changed files manually you can use 'commit -a', which will automatically add the changed files to the index (but not newly created ones). Since all your files are new in this initial import you need to add them:
    git add .

    Then commit them:
    git commit

    When you want to grab your code from a remote repository and put it in the current directory, use:
    git clone ssh://SERVER/home/USERNAME/git/PROJECTNAME

    Enter your directory, you can then make a branch for hacking on:
    git branch BRANCHNAME

    View your list of branches:
    git branch

    Then you switch to that branch:
    git checkout BRANCHNAME

    Modify some code and check it into your local BRANCHNAME branch:
    git commit -a

    Switch back to your original local branch:
    git checkout master

    Merge the changes into the master branch:
    git merge BRANCHNAME

    Delete the extra branch (-D will force it to delete if you didn’t merge it):
    git branch -d BRANCHNAME

    Push the branch to your server:
    git push ssh://USERNAME@SERVER/home/USERNAME/git/PROJECTNAME

    There's some more tutorial information on Git here.

      Bazaar – (bzr)


    Bazaar, written in Python, is probably the slowest of the 3, however the current project roadmap for 1.0 is to match the speed of git, so there might be some improvements appearing. There are benchmarks here showing much better speed improvements up to 0.15, but not 0.16/0.17, which also list more performance improvements in their changelogs. I haven't found any videos on Bazaar, but there have been three Shuttleworth posts recently on Bazaar as a lossless RCS.

    For public Bazaar hosting there is Launchpad, which has bug tracking and such for projects, and stores personal user branches.

    Bazaar seems fairly simple to use. I haven't needed any of the more advanced features, but it seems like advanced stuff would be simpler under Bazaar than Git; for the simple stuff there isn't any major difference.

    Firstly set your name:
    bzr whoami "Your Name <EMAIL@DOMAIN.com>"

    Enter your source code directory and initialize it:
    bzr init

    Add the files to the index:
    bzr add .

    Commit the branch. This same command is also used to commit code after it's modified; by default it will add all changed files to the index, like -a in git:
    bzr commit

    You can create a repository to store branches, this allows you to save space by sharing the common files between them.
    bzr init-repo REPONAME
    cd REPONAME

    Now you can branch off from your remote branch into the local repository. Notice it's sftp for ssh now, a different standard for the same thing again; you can use ~ for the home folder now though. There is also bzr+ssh://, which doesn't seem to need the paramiko library, but I'm not sure of the difference between them other than that:
    bzr clone sftp://USERNAME@SERVER/~/bzr/PROJECTNAME

    In addition to 'clone', you can also use 'checkout'. This means that any changes you commit, as well as being committed to the local branch, will also be committed to the branch you checked out from, if possible. This is somewhat similar to svn, except changes are still committed to the local branch regardless of the remote branch being accessible (unless you use --lightweight, in which case it works just like svn and everything depends on the remote branch working). You can also use checkout inside a branch to obtain the latest committed version of that branch into the working directory, which is sometimes needed if you push branches, as pushing will transfer the .bzr directory with the revisions but not the working tree.
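    For example, a normal (bound) checkout versus a lightweight one (a sketch, using the same placeholder URL as elsewhere in this post):
    bzr checkout sftp://USERNAME@SERVER/~/bzr/PROJECTNAME
    bzr checkout --lightweight sftp://USERNAME@SERVER/~/bzr/PROJECTNAME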

    You can fork off from your local branch for experimental coding, which will make a separate folder in the repository:
    bzr clone PROJECTNAME PROJECTNAME-testcode

    Then after coding, change to the main local branch directory and merge:
    bzr merge ../PROJECTNAME-testcode

    Then you can push the local branch back to your servers branch:
    bzr push sftp://USERNAME@SERVER/~/bzr/PROJECTNAME

    Also see the official Bazaar tutorial.

      Mercurial – (hg)


    Mercurial works basically the same as Bazaar. There's a Google Video tech talk on it here (50min).

    Thus you must firstly identify thyself:
    echo -e "[ui]\nusername = YOUR NAME <EMAIL@DOMAIN.com>" > ~/.hgrc

    Changeth to thy source code directory and initilizeth with:
    hg init

    Addeth ye new files to thy index:
    hg add

    Commiteth thy files to thy repo:
    hg commit

    Snag your remote repo to a local location:
    hg clone ssh://USERNAME@SERVER/~/hg/PROJECTNAME

    Branch off your local main to a secondary branch:
    hg clone PROJECTNAME PROJECTNAME-testcode

    Modify some code, and commit to the secondary branch with:
    hg commit

    Change back to your primary local branch and merge (this needs 2 commands):
    hg pull ../PROJECTNAME-testcode
    hg update

    Push it to your remote repo:
    hg push ssh://USERNAME@SERVER/~/hg/PROJECTNAME

    Official Mercurial Tutorial.

      Finally


    After trying out all 3, I found them to be very similar to each other and any would be suitable for most purposes; you could probably pick one at random and be happy, or choose one based on the public services that are available, such as Launchpad. I will probably end up using bzr; hg seemed to make merging a bit more of a pain, requiring an extra step, and the 'merge' command somehow changed from the docs to the 'update' command. Also the aesthetics of the output weren't as good, but that's a bit nitpicky. Bazaar's rapidly improving speed should see it ahead of hg if they meet their goals. I also liked git quite a lot and might use that for some stuff, but it isn't available on the Solaris systems at uni and requires 22mb just for the basic binaries, so too much for me to install locally (50mb directory limit). I do favor its approach of having all the branches in the one local location rather than making a whole new one each time; it cuts down on the appearance of clutter.

    If you are looking for public hosting for your code with a repository of your choice, you can check this Wikipedia article, which has a handy list of hosts and what systems they support.





    ZFS on Linux – Freedom can be so restrictive

    9 06 2007

    UPDATE2: Back in May, there was a post on Jeff Bonwick's (lead ZFS developer) blog with pictures of him and Linus having lunch; they were linked to from Jim Grisanzio's (another Sun employee) blog with the title of "ZFS Pics".

    There is also some work on developing a new Linux filesystem, btrfs with many of the ZFS features. “the filesystem format has support for some advanced features that are designed to leapfrog ZFS”.

    UPDATE: There has recently been some talk on the kernel development mailing list about GPLv2, GPLv3 and Solaris, including ZFS: Linus's post, skeptical about Sun cooperating, and Sun's CEO's reply saying "if it was, we wouldn't be so interested in seeing ZFS everywhere, including Linux, with full patent indemnity.".

    ZFS is a great file system from Sun. Currently it's going to be the default file system in OSX Leopard when it's released (apparently read-only), and it's already in the FreeBSD kernel. And of course Sun's operating system Solaris.

    Grub boot loader already allows for booting from it.

    Sun claim it to be the last word in filesystems. Apparently speed wise it's close to hard drive platter speed like XFS, it handles software raid like LVM, it is able to handle more storage capacity than anyone should ever need like ext4, and it supports compression and snapshots, with encryption being worked on.

    There is ZFS on FUSE, which allows you to use it on Linux, but FUSE is slower than a real file system (benchmarks here) and it is much harder to have the main root partition on it, as it must load the programs that access the hard drive from somewhere. dpkg also requires a patch for systems using Debian apt.
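    As a rough sketch of what using it looks like once the zfs-fuse daemon is running (the device and dataset names here are placeholders, not from any real setup):
    zpool create tank /dev/sdb
    zfs create tank/home
    zfs set compression=on tank/home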

    Unfortunately there are 2 problems with getting it into the core Linux kernel:

    Licensing and Patents.

    Currently OpenSolaris is under Sun's CDDL, which is incompatible with the GPL license that the Linux kernel uses. Sun have been talking about GPLing Solaris with the GPLv3. Would this mean we could see ZFS in Linux? Unfortunately no; the Linux kernel is under the GPLv2, with Linus previously saying that he would probably stick to GPLv2 for the Linux kernel, although he did recently say he was 'pretty pleased' about the new draft but still skeptical. The GPLv2 says that there must be no restrictions on how the software is used; the GPLv3 says you must not use it with DRM or on hardware that deliberately prevents the modification of the software (ie Tivo). Some parts of ZFS are under the GPLv2 via grub, but only the very basic bits needed for booting, so probably not enough to use on a system.

    The other problem is Sun apparently have 56 patents on the technology that goes into ZFS. Even if it were under a license compatible with the Linux kernel, these could still prevent widespread adoption of it in the Linux community. It's theoretically possible that Sun is secretly being paid by MS to get their code into the kernel and sue 'em, although it seems a bit too tinfoil hat to me. Sun apparently won't sue anyone using their codebase, but I'm not sure how legally binding that is. The patents also prevent reverse engineering ZFS from scratch.

    Sun have recently been making an attempt at getting the Linux community involved with Solaris, recently recruiting an ex-Debian developer, Ian Murdock, whose job it is to make Solaris more appealing to the Linux user with project Indiana (mailinglist), a binary based Solaris distribution designed to be what people expect from Linux. It is possible that we could see Sun releasing ZFS in such a way that the Linux community can make use of it, as a show of good faith, but it's also possible that they will keep it as bait in an attempt to sway Linux developers to their side. Sun have been fairly good with the free software community of late, releasing Java under the GPLv2 (at least the bits they could), but it might be viewed as an attempt at keeping Java in play since C# and Flash have taken a large chunk out of the area.

    We could also see a few GNU/Linux distributions switch to GNU/Solaris ones if/when/however Solaris is GPL'd; we could see an Ubuntu Solaris one day, and it being under the newer GPLv3 license could make it the free software OS of choice (well, maybe it would still be HURD because of its microkernel, but that doesn't seem to be usable yet after almost 20 years of development). There already is Nexenta, which is a GNU/Solaris distribution similar to Ubuntu. I tried the version that shipped with the OpenSolaris demonstration pack (they ship 'em to you free here, like Ubuntu does here; it includes a bunch of Solaris versions on 2 dvds, and the case smelled funny). It looked fairly nice for an Alpha, although it didn't detect my networking or sound; the newer Developer Solaris on the same cd had better hardware support, so Nexenta might just need a newer kernel version (they have already released an alpha7 and CP with ZFS boot support, but I haven't tried them just yet).

    I hope to see ZFS in the Linux kernel. Every time it's brought up in discussions it generally goes: ZFS is cool, I want it; there's a FUSE version; FUSE is slow, I want it for real; Linux can't have it because of CDDL; it's really the patents that are the problem.

    Hopefully someone will eventually code it, just ignoring the patent issues for people's personal use, and distros could start to include it when Solaris gets GPL'd, or Sun will make some statement about it since it seems to be the most commented issue on ZFS.

    http://en.wikipedia.org/wiki/ZFS
    http://en.wikipedia.org/wiki/GNU_General_Public_License#Version_3
    http://kerneltrap.org/node/8066
    http://www.zdnet.com.au/news/software/soa/Sun-looks-to-GPL-v3-for-Java-Solaris/0,130061733,339273561,00.htm





    Ubuntu Feisty is out!

    19 04 2007

    Just a minute ago:

    http://releases.ubuntu.com/7.04/

    http://mirrors.cat.pdx.edu/ubuntu-iso/feisty/

    EDIT: Those were the correct Feisty ISOs; http://www.ubuntu.com/ links to them with the same md5sums.

    EDIT: It's still up in the air whether this is actually the final release or not.

    The iso that is up now is in the directory where you would expect to find a final release and it is being mirrored to other servers.

    The md5sums are the same as the livecd from the 15th and the timestamp is also the 15th (except for the torrent, which is the 19th, but the tracker is down); this would happen if there weren't any changes from the 15th build because it worked fine.

    Ops in the #ubuntu channels keep telling everyone that it's not 'released' yet, saying wait for the official announcement, but it's not clear whether they mean that it hasn't been 'officially decreed' from the powers above, so it's technically not released, or whether they mean that the file in the iso directory isn't the correct version. It's also possible that the file is the final one, but there is a slight chance of it being changed right at the last minute.

    I can't really see why it's in the folder if it's not the final release, unless there is a mirror system that can handle binary differences between the isos, so that putting up an out of date version means the mirrors only have to download the small difference, allowing them to get it quicker.

    So it may or may not be the final release. I've got it downloading; I'll check the md5sums when it's actually released to see if it changes or not.
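    Checking is simple (a sketch; the ISO filename is a guess at whichever image you grabbed):
    md5sum ubuntu-7.04-desktop-i386.iso
    Compare the output against the matching line in the MD5SUMS file from the release directory.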





    Linux/UNIX Permissions

    17 04 2007

    I'm writing this because people sometimes seem to have trouble understanding the permissions under unix and get confused with setting permissions like 777 or 555.

    Permissions
    Basically every file and directory has a username and group associated with it. It then has 3 sets of permissions: for owner, group and other.

    The root user and the owner of the file have full control over what permissions can be set; the owner can, for instance, remove root's permission to read it, but this can be overridden by root. It is important to remember that by default root would be blocked from accessing the file until that is overridden, so things like auto cleaning scripts that remove files from a directory could be bypassed if they aren't coded to correctly ignore permissions; this can be an advantage if you want to allow users to be able to do so.

    The 3 sets of permissions each consist of 3 settings, Read, Write and eXecute which being binary are either on or off.

    For files what each of these does is fairly obvious:

  • Read allows you to read data from the file
  • Write allows you to modify the file (including deleting it)
  • eXecute chooses whether you can run the file. The execute bit isn't secure, as it would be possible to use another program to execute the file regardless of whether the execute bit is set, such as using sh to call a shell script directly, so you would need a fairly heavily locked down system before you can be 100% sure that a file with the execute bit disabled won't be executable by someone who is going out of their way to do so. You also need to be able to read a file in order to execute it.
  • Extra info: Execute can also be set to S instead of x; this allows the executed program to be run with the permissions of the owner of the program, rather than the permissions of the user running it. This can be a very big security risk.
  • Directories are a little bit different:

  • The execute bit decides whether you can enter the directory: without it you can't 'cd /directory' into the directory, though (with read) you can still 'ls /directory'.
  • Read is used to determine if you can list the contents of the directory, so you can block the ability to use ls to list the contents but still allow a user to enter the directory with cd by allowing execute. It is also possible to create a file in a directory that can be accessed by specifying the full path name, without being able to browse the directory itself.
  • Write allows you to create files in a directory, and also delete/rename the directory itself and files inside the directory (regardless of owner). You can write to a file without having read access.
  • Extra info: Write access leads to a problem: a user can delete/rename the directory itself or files that aren't theirs. If you want a directory that users can create files in but you want to stop them from deleting it, such as /tmp, this is solved by having an extra bit called the sticky bit (+t). The t only shows up for the 'other' user, since the owner and root are expected to be able to delete their own directory. If /tmp is missing the sticky bit then a user can cause havoc with the system by deleting the tmp directory that is required by a lot of programs. Files can also have the sticky bit but it is ignored nowadays; it was designed to allow the files to 'stick' in memory.

    STICKY FILES
    On older Unix systems, the sticky bit caused executable files to be
    hoarded in swap space. This feature is not useful on modern VM systems,
    and the Linux kernel ignores the sticky bit on files. Other kernels may
    use the sticky bit on files for system-defined purposes. On some systems,
    only the superuser can set the sticky bit on files.

    STICKY DIRECTORIES
    When the sticky bit is set on a directory, files in that directory may
    be unlinked or renamed only by the directory owner as well as by root
    or the file owner. Without the sticky bit, anyone able to write to the
    directory can delete or rename files. The sticky bit is commonly found
    on directories, such as /tmp, that are world-writable.
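    For example, creating a world-writable shared directory with the sticky bit (the directory name is just a placeholder):
    mkdir /shared
    chmod 1777 /shared
    ls -ld /shared
    The ls output should start with drwxrwxrwt, the trailing 't' showing the sticky bit.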

  • Permissions as shown in an ls:
    U G O ref user grp size date time name
    drwxrwxrwt 13 root root 16K 2007-04-17 23:09 tmp

    The 'd' stands for directory; for normal files this will be a '-'. Next we have the 3 permissions for the owning user (root), 'rwx', then the 3 for group (root), 'rwx', and then the 3 for other users, 'rwt'. So root and the owner have full permissions (in this case the owner is also root), and all other users have almost full permissions, except that the sticky bit stops them deleting or renaming files they don't own.

    Then there is a counter. This isn't important to understand, but it counts how many times that file/directory is referenced; when it is 0 the file system will consider that space to be free space and it will be used for any new files created. For normal files this is normally 1, unless the file has been hard linked. For directories this changes depending on the number of subdirectories it contains, since each subdirectory has a link back to the parent directory in the form of '..'; a directory without subdirectories has 2, one for the parent directory's link and another for the directory's link to itself.

    We then have the username (root), which is associated with the owner permissions, and then the group (also root), associated with the group permissions. Then the size, date and time.

    Groups
    Users have a primary group but can belong to multiple supplemental groups. This is defined in the /etc/group file. This way it's possible to have a file that one user can modify as their own, people in the same group as the file can read but not modify, and everyone else is completely blocked. You can also use 'usermod' to change which groups a user belongs in. You can see what groups you are in with 'id'.

    For example:
    usermod -aG newgroup username
    The -a tells usermod to append the groups; without it, any supplementary groups the user is in would be removed if they weren't specified. -G is for supplementary (secondary) groups, which are normally all you need to change (lowercase -g sets the primary group).

    Mount points
    It is a good idea to have no permissions enabled for unmounted mount points such as /mnt/cdrom; you can then set another set of permissions for when it is mounted, which will automatically be applied each time that file system is mounted. If you want regular users to be able to mount something, that is set in /etc/fstab, not on the mount point permissions. Doing this will give users a permission denied error if they try to access an unmounted directory, rather than just getting an empty directory.
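    A quick sketch (using a hypothetical writable filesystem mounted at /mnt/data):
    chmod 000 /mnt/data       # while unmounted: nobody can enter it
    mount /mnt/data
    chmod 755 /mnt/data       # applies to the mounted filesystem's root and sticks across mounts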

    Letter mode vs octal mode permissions
    Often you will see chmod commands with numbers such as 'chmod 750 /tmp/somefile'; these are permissions in octal mode (octal because there are 8 choices, 0-7), with one number each for user, group and other. The numbers are a combination of the different permissions: each permission type is assigned a value, eXecute is 1, write is 2 and read is 4, and these numbers can then be added together to get a permission, such as 5, which is read and execute, or 7, which is full permissions. Sometimes there is a 4th number that is for the extra bits such as sticky and setuid. 'man chmod' for more information.
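    For example (the filename is a placeholder):
    chmod 640 somefile        # user rw (4+2=6), group r (4), other none (0)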

    If you don't like the number system you can use the easier to remember letter system.
    For example 'chmod ugo+rwx somefile' gives user, group and other full permissions.

    Setting permissions en masse
    You might want to set all files in a directory to one set of permissions such as 644, to allow the user read and write but everyone else read only. This can be done with 'chmod -R 644 /directory', but it has a problem: if you have sub directories and set these permissions, users will not be able to enter the sub directories because they need execute access. You can fix this with the command 'chmod -R ugo+X /directory'; the capital X tells chmod to apply the execute bit only to directories (and to files that already have execute set for someone).
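    A one-pass alternative (the directory name is a placeholder):
    chmod -R u=rwX,go=rX /directory    # files without execute end up 644, directories 755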







