Software dev, tech, mind hacks and the occasional personal bit

Author: James

Talk tonight: Responsive Layout with HTML5

I’ll be giving a talk at Sydney ALT.NET tonight:

Want to build a web application that dynamically changes layout to best suit the client, be it mobile, tablet or desktop, all from the same HTML? Fun times with HTML5, Bootstrap, HAML and Sass. You’ll get to see it in action, and the code behind the magic.

From 6pm at ThoughtWorks Sydney office on Pitt St. Remember to RSVP on the Sydney ALT.NET site to help with catering. See you there!

VPS Performance Comparison and Benchmarking

What VPS / cloud instance provider gives you the best bang for buck for a small Linux server? I’ve focused on evaluating providers at the low-cost end with reasonable latency to Australia – Linode Tokyo, Octopus Australia, RailsPlayground USA and Amazon Singapore. From the tech specs, most providers look similar. However, once you have an instance, you often find huge variation in performance due to contention for CPU and disk on the node. For a simple test, I ran a small Ruby script on each server every 20 minutes via cron, and logged the results to a CSV file over the same five-day period.

Here is the script:

start = Time.now

# Disk test: shell out to read the man page for ls and list the root directory
`man ls`
`ls /`

disk_time = Time.now - start

start = Time.now

# CPU test: some made-up arithmetic to keep the processor busy
(1..200000).each { |i| i*i/3.52345*i*i*i%34 }

cpu_time = Time.now - start

# Emit one CSV row: total,disk,cpu,date,time
total = disk_time + cpu_time
puts "#{total},#{disk_time},#{cpu_time},#{start.strftime('%Y-%m-%d')},#{start.strftime('%H:%M:%S')}"

It uses man and ls to exercise waiting on disk, and some made-up maths to exercise the CPU. It’s not perfect, but it gives a good indication of performance.
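For reference, I triggered the script from cron; an entry along these lines produces the log (a sketch only – the script path and output file names are hypothetical):

# run the benchmark every 20 minutes and append a CSV row to the log
*/20 * * * * /usr/bin/ruby /home/james/bench.rb >> /home/james/bench.csv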

Results
All times are in seconds, and you can download the full data to get a better picture. I’ve provided a summary and notes for each provider below.
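If you grab the raw CSV, the summary figures can be reproduced with a minimal Ruby sketch like this (the file name is hypothetical):

# summarise the benchmark CSV written by the script above
require 'csv'

totals, disks, cpus = [], [], []
CSV.foreach('bench.csv') do |total, disk, cpu, _date, _time|
  totals << total.to_f
  disks  << disk.to_f
  cpus   << cpu.to_f
end

{ 'Total time' => totals, 'Disk' => disks, 'CPU' => cpus }.each do |label, times|
  average = times.inject(:+) / times.size
  puts format('%-10s average %.2f, max %.2f', label, average, times.max)
end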

Linode 512MB Tokyo (Xen)

             Average   Max
Total time   0.81      1.74
Disk         0.03      0.13
CPU          0.78      1.63

Full Results

This is my second Linode VPS. I asked to be moved to a different node, as the first one started fast when I got it but performance degraded (due to contention on the node?) within a few days. Nothing else is running on the VPS besides the script. Overall this is consistent, good performance in both CPU and disk. Latency to this VPS is around 130ms from my ADSL in Sydney.

Octopus 650MB Australia (VMware)

             Average   Max
Total time   0.74      3.40
Disk         0.19      2.88
CPU          0.54      1.08

Full Results

Octopus is running a site for me with a significant cron job on the hour, so I’ve removed results collected on the hour. Despite running my live site, Octopus had the fastest average performance of all the VPSes tested. The higher max time could have been caused by load on my site. Octopus costs a bit more, but is hosted in Australia, so it has very low latency – around 20ms from my ADSL in Sydney.

Amazon Small Instance 1.7GB Singapore (Xen)

             Average   Max
Total time   1.25      2.42
Disk         0.20      0.40
CPU          1.04      2.03

Full Results

Amazon EC2 Small instances are now in the same cost ballpark as small VPS instances, when you opt for a high-usage reserved instance. Many people think that EC2 disk is poor. From my tests (and experience), it is not super fast, but it is consistent and reliable. Amazon is also very generous with memory and provides other benefits like free transfer to S3. Where it falls down is processor speed, which, although fairly consistent, is about 50% slower than Linode and Octopus. Latency from Sydney ADSL is around 120ms.

RailsPlayground 640MB USA (Virtuozzo)

             Average   Max
Total time   3.53      24.12
Disk         2.44      23.42
CPU          1.09      2.66

Full Results

My RailsPlayground VPS is running various services and sites for me, but none of them are high load. As you can see, the CPU performance is similar to Amazon’s and doesn’t vary too much. The problem is disk, which varies hugely and can sometimes leave sites and services unresponsive while waiting for IO. Latency from Sydney ADSL is around 200ms.

Conclusion
If you want a fast VPS with low latency in Australia and are willing to pay a bit more than the other providers listed, Octopus VPS will serve you well.

For lots of memory but a slower CPU, an Amazon Small instance is a good option.

For faster CPU but less memory, Linode is your best choice.

It’s worth testing out any new VPS and keeping an eye on performance over time. Contention on your node can change without you being aware of it, dramatically impacting the performance of your VPS.

Jetstar Review: Booking a holiday package

With some holiday leave coming up, my wife and I decided to go to Queensland for a warm break from the Sydney winter. After a little online research comparing Virgin and Jetstar packages against booking directly, it was clear that a Jetstar package for the flight and hotel was several hundred dollars cheaper. The hotel on offer also looked quite good, and only had availability for our dates when booked through Jetstar.

Tues 7 June
Went to the Jetstar site. The first time I tried entering our dates for the search, I got a message about the site being overloaded, asking me to please try again. I did so, and got to the next page, allowing me to customise flight times and choose accommodation options. The hotel we wanted to stay at had the same room type listed three times, at different prices. I had called Jetstar about this earlier in the day, as I’d seen it and been confused while researching prices. They explained that it was the same room type – if the cheapest price for the room type was already sold out, you would have to buy the next one up, and so on. Following this advice, I chose the cheapest price offered for the room type.

I then tried to click the ‘Reviews and Information’ link which had worked earlier when I’d been doing the price comparison. This time, I got an exception message:


For the technically minded, you can see that the link from the package configuration page was faulty and did not include a ‘hotelId’ (bug?). Next, the hotel information page did not check its input parameters at all (bug and security risk) and exploded. Lastly, you can see that the Jetstar site has not been configured for production error messages. Instead, a developer error message was displayed, exposing implementation and technology details to the casual observer (security risk and poor user experience).

Somewhat concerned, I continued with the booking process. Choosing flights worked well, except I did wonder why one of the flight choices was listed as 4 hours’ duration but left at 7am and arrived at 10am:


We next entered personal details for me and my wife. We had to try several times to submit the form as there must have been validation rules that our data was not meeting, but no error messages were displayed to help us. We fiddled around with the details we’d entered a bit (different formats for phone numbers etc), and then it finally worked.

Transitioning to the next page was very slow, but eventually a chance to choose seats and travel insurance came up. My wife was sitting with me at the computer and we decided to choose seats. It was certainly not clear that we would have to pay extra for this. So I went ahead and chose some nice seats near the front of the plane (‘quick exit’ seats, they were called) and then noticed that this would add an extra ~$64 to the holiday cost! Realising our mistake, we chose regular seats a little further back. This still added an extra $16 to the cost of the holiday. We tried everything we could think of to de-select our seat choices, but could not find a way that would actually stick (you could de-select temporarily, but this could not be saved). Somewhat annoyed, we continued with the booking.

Next I entered my credit card details. It turned out there was an extra $30 booking and service charge that suddenly appeared at that point. Not happy but keen to finish the booking process, we next hit the pay button. The site took a while to respond and then reloaded the payment page, showing that there was a balance of $-907.40 with a proceed button below it. Quite a mystery. Had my credit card been charged? Had anything been booked? What did the mysterious $-907.40 mean?

I rang the call centre. Unfortunately it was either overloaded or down and I couldn’t even get to the “Press 1 for English…” prompt. I waited 15 minutes and rang again. I left the phone on speaker and went on with other things. I was on hold for 54 minutes before I gave up and went to bed.

Wed 8 June
In the morning, I rang the call centre again, and got through in about 20 minutes to the reservations section. I explained the situation to the customer service person and she found my booking. It turned out that the flights were booked, the hotel was not booked, and my itinerary had not been sent. I explained which hotel was meant to be booked and she went to talk to her supervisor. I was on hold for a further 30 minutes or so, with a few brief questions from her during this time. She was able to send me the flight itinerary without the hotel, which she did. She was unable to help with the hotel booking, but sent an internal request to the holidays section and said they would call me back within 24-48 hours to help with it. I asked if I had any assurance that the hotel was booked. She said I didn’t. I asked if there was any way to make this faster, as I needed to know whether I should book my own accommodation. She sounded annoyed and said that there was nothing she could do: it was a different section of the business, she couldn’t help me further, just wait for the phone call in 24-48 hours. I did receive a copy of the itinerary for flights without the hotel within 24 hours, as promised. Meanwhile, I had tweeted to @JetstarAirways about the experience so far, and they asked for my booking reference via DM to expedite things.

Thurs 9 June
Finally received a phone call from Jetstar holiday section. The guy said that he had looked into the case and sorted out the hotel booking. Great news! I asked if he could do anything about the $16 seat booking charge from earlier, but he said that it was clearly explained on the site, and he couldn’t do anything about it. He promised that the complete itinerary would arrive including the hotel booking within 24 hours and I gave him my email address (not sure why they didn’t have it from earlier, I’d already received the flight itinerary via email).

Fri 10 June
Waiting for itinerary. It did not arrive.

Sat 11 June
Wondering what had happened, I rang the call centre again. This time, I didn’t go on hold at all and got straight through to a helpful lady who confirmed my email address – it had been entered incorrectly, which was probably why I hadn’t received the email. She fixed the typo in my email address, and I received the complete and correct itinerary within 10 minutes. The booking process was now complete!

Conclusion
A transaction I had expected to take about 20 minutes booking through the site took about four days from start to end, including about two hours on the phone to the call centre. The Jetstar price was still good compared with other options, but the unexpected charges of $16 for choosing seats and $30 for the booking fee were a major turn-off. The bugs in the site (especially validating personal details without showing error messages) must lose many potential customers. The failure of the purchase process, which left the customer confused but charged, without services being booked, was a major issue. The call centre delays were a time-waster, and the siloed nature of the business between flights and holidays made solving problems slower and harder. Jetstar’s Twitter account was potentially a help in resolving problems.

Post-Holiday Update
Happy to say that the flights and hotel worked well after all the booking was sorted out. Jetstar also upgraded us to a better room at the hotel which was a nice touch.

Automated Testing and the Test Pyramid

Why Do Automated Testing?
Before digging into a testing approach, let’s talk about the key reasons to do automated testing:

  • Rapid regression testing, allowing systems/applications to continue to change and improve over time without long “testing” phases at the end of each development cycle
  • Finding defects and problems earlier and faster, especially when tests can be run on developer machines and as part of a build on a CI server
  • Ensuring external integration points are working and continue to work as expected
  • Ensuring the user can interact with the system as expected
  • Helping with debugging, writing and designing code
  • Helping specify the behaviour of the system

Overall, with automated testing we are aiming for increased project delivery speed with built in quality.

Levels of Automated Tests
Automated tests come in a variety of different flavours, with different costs and benefits. They are sometimes called by different names by different people. For the purposes of clarity, let’s define the levels as follows:

Acceptance Tests
The highest-level tests, which treat the application as a black box. For an application with a user interface, these are the tests which end up clicking buttons and entering text in fields. These tests are brittle (easily broken by intended user interface changes), give the most false-negative breaks, and are expensive to write and maintain. But they can be written at a level of detail that is useful for business stakeholders to read, and are highly effective at ensuring the system works from an end-user perspective.

Integration Tests
Code which tests integration points for the system. They ensure that integration points are up and running, and that communication formats and channels are working as expected. Integration tests are generally fairly easy to write but maintenance can be time consuming when integration points are shared or under active development themselves. However, they are vital for ensuring that dependencies of the system you are building continue to work as you expect. These tests rarely give false negatives, run faster than acceptance tests but much slower than unit tests.

Unit Tests
Tests which are fine-grained, extremely fast, and test the behaviour of a single class (or perhaps a few related classes). They do not hit any external integration points. [This is not the purest definition of unit tests, but for the purposes of this post, we do not need to break this category down further.] As unit tests are fast to write, easy to maintain (when they focus on behaviour rather than implementation) and run very quickly, they are effective for testing boundary conditions and all possible conditions and code branches.
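As an illustration, a unit test at this level might look like the following RSpec sketch (the class and behaviour are invented for the example):

# a fast, focused unit test: no UI, no external integration points
describe ShoppingCart do
  it 'applies a percentage discount to the total' do
    cart = ShoppingCart.new
    cart.add_item(price: 100) # hypothetical API
    cart.apply_discount(10)   # percent

    expect(cart.total).to eq(90)
  end
end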

Automated Test Strategy
An important part of test strategy is to decide the focus of each type of test and the testing mix. I’d generally recommend a testing mix with a majority of unit tests, some integration tests, and a small number of acceptance tests. In the test pyramid visualisation, I’ve included percentages of the number of tests, but this is just to give an idea of rough test mix breakdown.

So, why choose this sort of mix of tests? This mix allows you to cover alternative code paths and boundary conditions in tests that run very fast at the unit level. There are many combinations of these and they need to be easily maintained. Hence unit tests fit the bill nicely.

Integration tests are slower and harder to maintain, so it’s better to have fewer of them, and to target them specifically at covering off the risks of system integration points. It would be inefficient to test your system logic with integration tests: they would be slow, and you would not know whether a failure came from your code or from the integration point.

Finally, acceptance tests are the most expensive tests to write, run and maintain. This is why it is important to minimise the number of these, and push down as much testing as possible to lower levels. They give a great view of “Is the system working?”, but they are very inefficient for testing boundary conditions, all code paths, etc. Generally, they are best for testing scenarios. For example, a scenario might be: a student logs in to the portal, chooses subjects for the semester in a multi-step wizard, then logs out. It is best to avoid fine-grained acceptance tests – they would cost more than the value they add; i.e., it would be better if they had never been written. An example of such a test could be: a student chooses a subject and it is removed from the list of subjects available to be chosen. Or: a student enters an invalid subject code and is shown the error “Please choose a valid subject code”. Both of these would be best pushed down to the unit test level. If that is not possible, it would be best to include them in a larger scenario to avoid the set-up overhead (e.g., for the error message test, it may be necessary to log in and enter data to get to step 3 of the wizard before you can check that an invalid subject code error message is displayed).
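To make the scenario idea concrete, here is a sketch of a scenario-level acceptance test using Capybara-style browser helpers (the pages, field names and messages are hypothetical):

# one scenario exercising the system end to end through the UI
describe 'Subject selection' do
  it 'lets a student log in, choose subjects for the semester, and log out' do
    visit '/login'
    fill_in 'Username', with: 'student1'
    fill_in 'Password', with: 'secret'
    click_button 'Log in'

    click_link 'Choose subjects'
    check 'MATH101'
    click_button 'Confirm selection'
    expect(page).to have_content('Enrolment confirmed')

    click_link 'Log out'
  end
end

Note how the test sticks to one whole scenario rather than asserting every fine-grained rule through the browser.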

With acceptance tests, I’d recommend writing the minimum up front, just covering the happy paths and a few major unhappy paths. If defects come to light from exploratory testing, then discover how they slipped through the testing net. Why weren’t they covered by unit and integration tests, and could they be? If you have tried everything you can think of to improve lower levels of testing, and are still having too many defects creeping through, then consider upping your acceptance coverage. However, keep the cost/benefit analysis in mind. You want your tests to make your team go faster, and an overzealous acceptance suite can eat into the team’s time significantly with maintenance and much increased cost of change. Finally, keep in mind there is a manual testing option. Manual testing needs to be minimised, but if it is 3 minutes to test something manually (eg, checking something shows up in a minor external system) or a week to automate the test, you’re probably going to be better off keeping a manual test or two around.

Team Roles and Tests
Ideally, it would be great to have developers who are interested in testing and the business, testers who know how to code and understand the business context, and BAs who are interested in testing and coding too – i.e., a team of people who could each do any role in a pinch, but who are more specialised in a particular area. Unfortunately this is pretty rare outside of very small companies with small teams. In a highly differentiated team, with dedicated developers, BAs and QAs, it generally ends up with developers writing and maintaining unit tests, doing a lot of the integration tests and helping out with acceptance tests. QAs generally write some integration tests and look after writing and maintaining acceptance tests, with help from developers. BAs are sometimes involved in the text side of writing acceptance tests.

English Language Text Files & BDD
There are many tools that bill themselves as Behaviour Driven Development (BDD) testing tools, based on features written in formulaic English, backed by regular expressions that match chunks of text and link them to procedural steps. Using such tools is often considered to be BDD, and a good thing, in a blanket way.

Let’s take a step back. BDD is about specifying behaviour rather than implementation, and encouraging collaboration between roles. All good things. BDD can be done in NUnit, JUnit, RSpec, etc. There is no requirement in BDD that tests are written in formulaic English with a Given-When-Then format. The converse is also true – if you write tests which are about implementation using an English-language BDD framework, you are not doing BDD.

Using an English-language layer on top of normal code is expensive. It is slower to write tests (you need to edit two files for every test, and get your matching regexes right); harder to debug, refactor and maintain (tracing test execution between feature text, regex and steps is time consuming, and there’s less IDE support); and less scalable as suites grow large (most of these frameworks use steps which are isolated procedures with global variables for communication, and no object-oriented abstraction or packaging). Also, for many people who are used to reading code, the English-language layer is less concise and slower to read than code for specifying behaviour.
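To make the two-file cost concrete, here is a Cucumber-style sketch (the model and steps are invented; the feature text is shown as a comment):

# The matching feature file (the English-language layer) would read:
#
#   Scenario: Student chooses a subject
#     Given a student with no subjects
#     When the student chooses "MATH101"
#     Then the student's subjects should include "MATH101"
#
# ...and every line must be matched exactly by one of the regexes below.
Given /^a student with no subjects$/ do
  @student = Student.new # steps share state via instance variables
end

When /^the student chooses "([^"]*)"$/ do |code|
  @student.choose(code)
end

Then /^the student's subjects should include "([^"]*)"$/ do |code|
  raise "expected subjects to include #{code}" unless @student.subjects.include?(code)
end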

What do you get from an English-language layer? It means that less technical people (e.g., the business sponsor and BAs) can easily read tests, and maybe, just possibly, even write the English-language half of tests.

It is worth carefully weighing up the costs and benefits in your situation before deciding whether you want to fork out the extra development and maintenance cost for an English-language layer. Unit tests and integration tests are unlikely to pay dividends from an English-language layer – non-technical people would not be reading or writing these tests. Acceptance tests are potentially the sweet spot. If your team’s business representative(s) or BA(s) are keen to write the English-language side of the tests, while keeping in mind the granularity of a scenario-style approach, you could be on to a winner. Writing these tests together could really help bring the different roles on the team to a common understanding.

On the other hand, if your team’s business sponsor(s) are too busy or not interested in sitting down to write tests in text files, and the BAs are not interested or have their hands full elsewhere, then there are few real benefits to the English-language layer, and the same costs apply.

A middle ground is to generate readable reports from code-based test frameworks. With this approach you get a lot of the benefits at much less cost. Business people and BAs cannot write tests unaided, but they can read what tests ran (a specification of the system) in clear English.
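For example, RSpec’s documentation formatter prints the describe/it text of the tests that ran as an indented specification; running the acceptance sketch above would produce output roughly like:

$ rspec --format documentation

Subject selection
  lets a student log in, choose subjects for the semester, and log out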

Traceability from Requirements
A concept that most commonly comes up at companies doing highly traditional SDLCs is traceability from requirements all the way to code and tests. ClearCase and some of the TFS tools are the children of this idea. Acceptance testing tools are often misused to automate far too many fine-grained tests in an attempt to prove traceability from requirements.

A friend was explaining his area of expertise: proof of program correctness for safety-critical systems, where many people’s lives depend on a system. I totally agree that for this small subset of systems, it is vital to specify the system exactly and mathematically, and to prove it is correct in all cases.

However, the majority of business systems do not need to be specified in such detail or with such accuracy. Most business systems which people attempt to specify up front are specified in natural language, which is notorious for inexactitude and differing interpretations. As the system is written, many of these so-called requirements change as the business changes, as technical constraints are realised, and as people see the system working and realise it should work differently.

Is traceability back to outdated requirements useful? Should people spend time updating these “requirements” to keep them in sync with the real system development? I would say a resounding NO. There is no point in this. If the system is already built, these artefacts have realised their value and their time is over. If documentation is required for other purposes such as supporting the system, then it is worth writing targeted documents with that audience and purpose in mind.

On the testing front, this traceability drive can lead to vast numbers of fine-grained acceptance tests (e.g., one for each requirement) being written in an English-language BDD test framework. This approach is naive and leads to the problems covered earlier.

Recent Experiences
On a project a couple of years ago which had a test square rather than a pyramid (hundreds of acceptance tests that ran in a browser), it took two people full time (usually a dev and a QA) to keep the acceptance suite up and running. Developers generally left updating the acceptance tests to the dedicated QA and dev, as it took around an hour to run the full test suite, and there were many false-negative failures to chase up (was the failure due to an intended screen change, an integration point being down, or a real issue in the software?). The test suite was in FitNesse for .NET, which was hard to debug and use, and had its own little wiki-style version control that didn’t fit in well with the source version control system. The acceptance suite was red the majority of the time due to false negatives, so it was hard to know whether a build was really working or not. Getting a green build took luck and several tries nursing the build through. I would count this as a prime example of an anti-pattern in automated test strategy: far too many fine-grained acceptance tests which took far too much effort to write and maintain.

On my last 3 projects, taking the test pyramid approach, we had a small number of acceptance tests which were scenario based, covered the happy paths and a few unhappy paths. On these projects, tests ran in just a few minutes, so could be run on developer machines before check in. The maintenance overhead of updating tests could be included in the development of a story rather than requiring full time people dedicated to resolving issues. Acceptance builds had few false negatives and hence were much more reliable indicators. We also chose tools which were code/text-file based, and could be checked into version control with the rest of the source code. The resulting systems in production had low defect rates.

Conclusion
Some of the ideas in this post go against the prevailing testing fashions of the moment. However, rather than following fashion, let’s look at the costs and benefits of the ideas and tools we employ. The aim of automated testing is to increase throughput on the project (during growth and maintenance) while building in sufficient quality. Anything that costs more to build and maintain than the value it provides should be changed or dropped.

Rails Refactor is now a Gem!

Rails Refactor, a small command-line tool to make Rails refactoring more fun, is now available as a gem. To use:
gem install rails_refactor
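As a taster, the headline command renames a controller and its associated views. If memory serves, an invocation looks something like the line below, but treat it as an assumption and check the README on GitHub for exact usage:

rails_refactor rename OldController NewController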

More info is available on GitHub.

VIM is Sydney Rails Devs’ Favourite Editor

Outstanding news! As part of the rails_refactor talk at RORO (Sydney Rails Group) tonight (great evening, by the way!), I asked for a show of hands on people’s favoured editors. I was amazed to discover that vim has edged out TextMate, with just over half of the people at the group using it as their editor of choice! As an aside, NetBeans had one supporter; RubyMine and Emacs had zero. The groundswell of support for vim (and the cheering) was impressive!

PS – this is a very nerdy post, but as a long-time vim fan, I had to report on it 🙂

Short Talk on rails_refactor at Rails group

I’ll be giving a short talk with Ryan Bigg on Rails Refactor at the next Sydney Rails Group (RORO) meet (Tuesday, Feb 8 from 7pm). We’ll be talking about Rails Refactor’s birth at a hack night last year, what it can do for you right now, and its bright future as your refactoring tool of choice. Hope to see you there.

Short Talk: Starting Android Development

I’ll be giving a short talk on Starting Android Development on Tuesday at the Sydney ALT.NET group.

We’ll be covering:

  • the platform
  • app design and abstractions
  • Java and IDEs for Android Dev
  • Emulator
  • Code walk through of a simple application I’m writing

Richard Banks (@rbanks54) will also be giving a talk on .NET BDD tools.

More info and RSVP on the ALT.NET blog.

See you there!

Migrating from Palm OS to Android

Palm and its Demise
I’ve been using Palm organisers and smart phones since the year 2000. I enjoyed developing for them, writing several medical applications, and using them extensively for calendaring, contacts and memos (PIM). The Treo smart phones were visionary at the time, providing integrated phone and PIM functionality, plus push email and basic web browsing.

I was at the lavish developer conference in Sydney when Palm, then the market leader, announced it was splitting into two separate businesses – software and hardware – and developing a new OS (Cobalt), which never saw the light of day. After that, Palm slowly lost its lead. I would have been interested to try out Palm’s last throw – the Pre and webOS – but it never made it to Australia. Now Palm has been purchased by HP, it is abundantly clear that Palm has had its day, and it’s time to move on. Goodbye Treo 650 and Palm OS Garnet!

Where to next?
Well, the major contenders for smartphones at this point are iOS and Android (sorry, Windows Phone 7, maybe next release :)). The iPhone is a nicely crafted piece of consumer electronics, and it’s the obvious choice for many people. Personally, I like the polish, but the limitations of the OS, the clumsy notification system, the vendor lock-in and the tightly controlled environment do not appeal. Android, especially with 2.2, is pretty smooth. It requires a lot more tweaking than an iPhone to get it into a good state, but once set up, it’s a really nice experience and lets you do quite a lot of things you can’t do on an iPhone. I chose an HTC Desire HD with Android 2.2.

Android Migration
What I particularly want to share is how I migrated my data across from Palm OS to Google services and Android 2.2, and which applications I chose to replace the beautifully crafted Palm PIM system. There are still some Palm users out there hanging on, and I’d encourage you to take the leap and move over to Android.

Aims

  • Powerful calendar app on the phone with an hour-by-hour day view, easy and quick adding/changing of events, and the ability to include additional public calendars. Synchronisation with a desktop application.
  • Contacts synced with Google mail / Google contacts and a desktop application.
  • Memos/Notes synced only with a desktop application (not cloud)
  • Push email

Apps & Architecture
After quite a bit of research and trial, I decided to go with:

  • Google services for Calendar and Contact storage in the cloud
  • Business Calendar for the Android calendar app. It uses Google services, has an hour-by-hour day view, and supports multiple calendars. It works pretty well, though it’s still in beta, with frequent updates and improvements.
  • The built-in contacts app from HTC. It syncs with Google contacts (and links to Facebook too) and works well with the phone app. It’s meant to sync with Twitter too, but the HTC apps for Twitter don’t seem to have been updated for the new Twitter authentication system.
  • Outlook 2007 for the desktop PIM application (I prefer Palm Desktop, but yes, no future there)
  • gSync to synchronise Outlook with Google services for Calendar and Contacts. This works pretty well, though not 100% reliably for things like deleting one occurrence of a repeating event. I also set the calendar synchronisation to cover only 100 days in the past and future, as this made it a lot faster to sync.
  • B-Folders Android app with wireless sync to the desktop for memos/notes (B-Folders works OK for this, but is a bit clunky for editing notes on the phone and requires you to enter a password frequently)
  • The built-in Gmail app works well for email, and I use the Gmail web client with offline sync on my PC

Data Migration

  • First, sync data with Outlook. Only Outlook 2003 or earlier is supported, so I installed Outlook 2000 for the sync. To change from Palm Desktop to Outlook for HotSync on Windows, run PalmOne > PIM Conduit Sync > Sync with Outlook from the Start menu. If you don’t have this app, you can download the latest version of Palm Desktop from the Palm site, which includes it.
  • I had a lot of errors during sync, but managed, largely through retrying, to get a clean sync to happen.
  • After a clean sync, I upgraded my Outlook to 2007, as it has a better user interface and works with gSync.
  • Next, I used gSync to sync about a year of past calendar data and all contacts into Google services. It works pretty well; some calendar events seemed to get duplicated, but not enough to be a major hassle. I did try syncing more years of calendar history with the cloud, but it seemed to slow down Business Calendar’s start-up time significantly, so I cleared out everything and synced a much smaller window – about a year. I then changed the sync to cover only 100 days in the past and future to make it run faster (it takes about 3 minutes). I currently have the sync run a few times a day automatically, but sometimes kick it off manually too.
  • Use ‘Google contacts > More actions > Find & Merge Duplicates’ to clean up and combine your contacts. I had a lot of email addresses in Google contacts which also had contact records from the Palm; this command did a good job combining them.
  • I exported all memos using Palm Desktop into individual text files (one per category), and then imported them into B-Folders as per these instructions. I had to manually change line endings (\r\n to \n) to avoid double spacing – see the sketch after this list. A few days later, a new version of B-Folders was released which can import all exported Palm memos from a single categorised file. I haven’t tried this feature, but it sounds like a time saver! The B-Folders sync between phone and desktop app is manual and initiated from the phone. It has worked well so far.
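For anyone doing the same migration, the line-ending fix above is easy to script; here is a minimal Ruby sketch (the directory name is hypothetical):

# normalise Windows line endings (\r\n -> \n) in the exported memo text files
Dir.glob('memos/*.txt').each do |path|
  text = File.read(path)
  File.write(path, text.gsub("\r\n", "\n"))
end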

Conclusion
In conclusion, I now have my data and quite workable PIM functionality on my Android phone. Business Calendar has a great multi-day view that my old Palm didn’t, but it is a bit slower for adding new events. The HTC contacts app gets pictures from Facebook, which is pretty cool, and syncs with my Gmail so I don’t have to maintain email addresses in two places; it is a bit clunkier to edit and add contacts, though. On the memo/note front, B-Folders encrypts notes, which is cool… but it’s a bit irritating to need to enter your password every time you launch the program, and it is clunkier to edit notes and does not remember your last open note and position between launches. The rest of the phone functionality is great, though, and a huge step forward from the Palm. It really needs a separate post, but the good web browser, GPS with maps, train timetables, movie times near you, Twitter client, etc. make it an amazing device and well worth the upgrade.

TRON and NORT

On the ultra-geeky front, I watched the original TRON last night, kindly lent to me by my buddy Doctor Dray. Having never seen it before, but having heard a lot about it, I was keen to finally watch it. The core idea of computer programs personified is pretty cool, and the 80s rendering is interesting to watch (it looks like the stuff we did in computer graphics class at uni!). The plot does stretch belief a bit too thin at times, though. To get an idea of how far movie tech has come between the 80s and today, check out the original 80s trailer and the new TRON: Legacy trailer.

Also, at high school, my fellow geeks and I spent quite a bit of time writing games in C like the TRON light cycle game, cunningly avoiding copyright violation by calling them NORT. After writing the two-player version, we moved on to writing simple AIs so that you could play against the computer. Recently, going through an old computer’s hard disk, I found the code for these. Thanks to the backward compatibility features of Windows, they still run, although they were written in Borland C/C++ for DOS! An amazing blast from the past… here’s a picture of AI NORT in action:
