Category Archives: Tech


Introducing: SourceTree

I’m pleased to announce that I’m finally ready to make my first fully-fledged commercial Mac OS X application available to the world!

SourceTree is a user-friendly Mac OS X front-end for Mercurial and Git, the two most popular distributed version control systems used today. The goal was to create a single tool which could deal with both systems efficiently, and to give a developer quick and intuitive access to the things (s)he needs to just get on with building software.

I thought I’d answer a few background questions on this that I get asked on occasion:

Why Mercurial AND Git?

Other apps tend to concentrate on just one version control system, so why am I supporting two? Well, as a developer I’m regularly coming across projects from both sides of the fence, and in practice I find I need to use both fairly regularly. I personally chose Mercurial for my own projects (and discussed why here), but I still use Git when dealing with other projects, and spend a fair amount of time hopping between the two. It struck me that even though they have their differences, they are both based on the same distributed principles, so having to use two separate tools was just unnecessary. I wanted a single tool which provided a common interface where that made sense, while still exposing the things they do differently where that was useful too. SourceTree 1.0 is my first attempt at that.
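As a small illustration of what ‘dealing with both systems’ entails, the very first thing a dual-VCS tool has to do is work out which system manages a given repository. A minimal sketch of that check (my own illustration – the function name and approach are not SourceTree’s actual code):

```python
import os

def detect_vcs(path):
    """Guess which DVCS manages `path` by looking for its metadata
    directory, walking upwards so it also works from a subdirectory."""
    path = os.path.abspath(path)
    while True:
        if os.path.isdir(os.path.join(path, ".hg")):
            return "hg"
        if os.path.isdir(os.path.join(path, ".git")):
            return "git"
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root
            return None
        path = parent
```

A real tool would then map the common operations (commit, pull, push) onto the right backend, exposing system-specific features only where they exist.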

Why only Mac OS X?

There were actually multiple reasons for this choice:

  1. I wanted to learn Objective-C and Cocoa on a real project.
  2. I know from experience that designing for multiple platforms can be a distraction, with more time spent on compatibility issues and less on functionality – and that’s before you even consider the compromises you have to make, particularly on UI conventions, which are far from uniform across platforms. I’ve been a multi-platform developer for more than 10 years, and for a change I just wanted to focus on the end user results and nothing else. I’m aware that schedules slip very easily when you overcomplicate, and I’m already supporting multiple DVCSes (something I consider to be an important feature point), so I deliberately chose to keep this element simple.
  3. Mac OS X has become my own platform of choice for most things now. The combination of stability, user-friendliness, Unix underpinnings and well designed hardware match my current needs perfectly. I’m done with the ‘some assembly required’ PCs that I loved tinkering with over the past 15 years.

What about Subversion?

A few people have asked me if I plan to add Subversion support too. I actually did intend to originally, until I realised how much time it was going to take to just do a decent job on Mercurial and Git. Within the time constraints, I focussed on the subject areas that I felt I could contribute most to – there are already quite a few Subversion tools out there for Mac OS X, but Mercurial and Git are much less well served, so that’s where I focussed my efforts.

I still have Subversion support tentatively on my work plan, but it’s not top of the list. I think it’s better to do your most important features well before diversifying. Plus, there are problems with Subversion – it’s very, very slow compared to Mercurial and Git, so to match the performance in SourceTree of things like the interactive searches and dynamic refreshing / log population I’d probably have to do a ton of extra caching just so the user wasn’t sat tapping their fingers.

Edit: I’ve made my decision on this: I don’t plan to support local Subversion working copies, but I will support working against Subversion servers from local Mercurial and Git repositories, via hgsvn and git-svn.
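For anyone unfamiliar with that bridging approach: git-svn keeps a full local Git repository that syncs with a central Subversion server, so you get local DVCS speed while the server stays on Subversion. A typical round-trip looks something like this (the server URL is just a placeholder):

```
# Clone a Subversion repository into a local Git repository
git svn clone https://svn.example.com/project/trunk project

# Work locally with normal Git commits, then:
git svn rebase     # fetch new Subversion revisions and rebase on top
git svn dcommit    # replay local commits back as Subversion revisions
```

hgsvn plays a broadly similar role on the Mercurial side.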

Why didn’t you make it open source?

Sorry folks – while I love contributing to open source (I’ve done a bit on SourceTree too, sending a patch back to BWToolkit), making it work as a business is very hard indeed. I half-killed myself trying to combine being an open source project leader with other commercial activities at the same time, so now I’m trying a more traditional approach. One thing I learned in the last few years is that there are some sectors and application types where being an open source maintainer is very compatible with also running a business based on that project, and there are others where trying to do both at once is a recipe for flaming out. Sucks, but there it is ;)

What’s Next?

I have a public, official roadmap for SourceTree and encourage users to suggest things they think should be on there, via the support system. I learned from running an open source project for 10 years that being open about your plans can be a big benefit – users like to know where things are likely to be going, and often have better ideas than the developer on what could do with a bit more spit and polish. They can also tell you what’s important to them, which is crucial for prioritising – as developers we tend to get carried away with things we want to work on, but in the end, it’s scratching the customer’s itch that matters most.

And while I’m really quite proud of SourceTree 1.0, there are plenty of features I’d like to continue to add, and definitely more room for some totally unnecessary beautification which I didn’t have time for in the first release. Hey, this is OS X ;)

SourceTree is available now on a 21-day trial license. Go get it already :)


Hosting services: my recommendations

After hearing on Twitter how an acquaintance’s new hosting provider went ‘mammaries skyward’ this week, much to their understandable annoyance, it occurred to me that I have some recommendations to make on this subject. While I don’t host that many sites, I’ve been doing it for long enough, across both personal and medium-traffic sites, to have experienced the highs and lows quite a few times already.

The Golden Rule: Support > Everything

When it comes to hosting, the most important thing to look for – beyond all the statistics of how much space and bandwidth you get, beyond even quoted up-times – is the quality of the support service. The big question is: when things go wrong – and if you host long enough, eventually they will, even in the best possible hosting environment – how quickly are problems resolved, and how responsive are the support engineers during the process? Literally nothing is more important than this, and unfortunately it’s the one thing you’ll only really learn with experience, unless you’re hosting a site big enough to get a formal SLA. Assuming you’re not going that big, the only way to judge is by being with a provider for a while, or knowing someone who has been with them, or possibly looking at online review sites – although frankly these are often highly unreliable, polluted as they are with inaccuracies and omissions, either through ignorance (people who post glowing reviews after being with the site for 2 weeks) or, unfortunately, through frequent shill reviews.

Later in the post I’ll name a couple of hosts I’ve had good experiences with over many years, through good times and bad.

Know Your Bandwidth

Personally, I instantly rule out any host that claims ‘unlimited bandwidth’. This is a crock – there is no such thing, and to claim otherwise just means the host is lying to you before you even start. They have to pay for their bandwidth, so they can’t possibly allow everyone truly unlimited usage and stay in business. If you really need unlimited bandwidth, i.e. you have a high-traffic site with lots of media files, then you will quickly bump into the way these hosts offer this ‘unlimited’ deal – via throttling. You may not have an absolute physical cap on your bandwidth, but if the tap is locked off to a slow dribble beyond a certain usage, it’s really worthless. In practice, ‘unlimited bandwidth’ is just a marketing point that they hope will draw in people who will only actually use a tiny amount of bandwidth, but who will somehow favour them because the offer looks good. Don’t be one of those dumb guys.
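To put rough numbers on why a throttled ‘unlimited’ plan is worthless for heavy use: once the throttle kicks in, your real ceiling is just the throttle speed multiplied by the seconds in a month. A quick back-of-envelope calculation (the 1 Mbit/s throttle figure is purely illustrative, not any particular host’s policy):

```python
def effective_monthly_cap_gb(throttle_mbit_per_s, days=30):
    """Maximum data transferable in a month at a sustained throttled
    rate, in decimal gigabytes."""
    bytes_per_second = throttle_mbit_per_s * 1_000_000 / 8
    seconds_per_month = days * 24 * 60 * 60
    return bytes_per_second * seconds_per_month / 1_000_000_000

# An 'unlimited' plan throttled to 1 Mbit/s can never actually deliver
# more than this per month, even running flat out 24 hours a day:
print(round(effective_monthly_cap_gb(1)))  # → 324
```

So a throttled ‘unlimited’ plan can easily deliver less than an honest plan with a clearly stated cap.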

Really you need to establish your bandwidth requirements and head for a host that can fulfil them for a reasonable price. For example, ogre3d.org uses between 125GB and 250GB per month, which is a reasonable amount, compared to my personal site here which only needs 5-10GB per month.

If you have ‘spiky’ bandwidth, i.e. occasionally you need to be able to distribute large amounts of data, but it’s not a constant stream, it would be best to go for a lower monthly limit and host high-bandwidth items elsewhere. I often use Amazon S3 for this purpose which can be made to look like a sub-domain of your own site, and which charges for bandwidth at a very fine granularity so matches your demand closely – it’s more expensive than buying a monthly allowance if you use it a lot, but for on-demand spikes it works very well.
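As a sketch of that cost trade-off (all prices here are made-up illustrations, not Amazon’s actual rate card – check current S3 pricing before relying on this):

```python
def spike_month_cost(plan_price, plan_allowance_gb, total_gb,
                     s3_price_per_gb=0.15):
    """Cost for a month where traffic above the hosting plan's allowance
    is served from S3 at a per-GB rate instead of upgrading the plan.
    The $0.15/GB figure is illustrative only."""
    overflow_gb = max(0, total_gb - plan_allowance_gb)
    return plan_price + overflow_gb * s3_price_per_gb

# An $8/month, 10GB plan with a one-off 60GB release spike:
print(spike_month_cost(8.0, 10, 60))  # 8 + 50 * 0.15 = 15.5
```

For an occasional spike this beats paying every month for a large allowance you rarely use; for sustained high traffic, the per-GB rate quickly becomes the more expensive option.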

Shared, Dedicated or Virtual/Cloud?

I currently use two shared hosts and one dedicated host, to match the demands of each site. Personally, I’m still very skeptical about virtual private servers and cloud hosting, due to a bad experience a few years back when we tried running ogre3d.org on a VPS. We lasted not much more than a month before we moved to a dedicated machine, because the VPS simply didn’t deliver on its promises. Performance was unpredictable, and to be honest you had the worst of both worlds: you had to admin your own server, but you still had no guarantee that no-one else would be screwing with something on the machine, or that the disk arrays wouldn’t be hammered by someone else (regardless of CPU assignment), or some other balancing issue. Virtualisation has evolved in the last few years, so this may not be an accurate representation anymore, but personally I wouldn’t go for a VPS again any time soon, unless it was a machine I controlled entirely and partitioned into virtuals myself. At least with shared and dedicated servers you know exactly what you’re getting: either a low-maintenance but shared-resource environment, or total control and power. VPS claims to offer a middle ground, but in my experience it didn’t deliver.

So, who do I use?

For my shared hosting, I’ve been using Hosting Matters for about 10 years now. I went through a couple of other hosts before them and had terrible experiences, but since I switched to them I’ve been very happy. I can count the number of hours downtime (that I’ve been aware of) over those years on one hand, and whenever there’s an issue they’re incredibly fast to respond – they have both community forums and support tickets depending on the urgency. It’s also very reassuring to see the same names cropping up in the support responses over the years.

Their offerings are pretty standard, nothing that would make them jump off the page for anyone looking for a stellar feature list or super-cheap pricing. But they’re very reasonable, they’re honest about what they’re offering (like bandwidth), and as I said before, support > everything.

For dedicated hosting, since 2007 I’ve used Dedipower. They’re based in Reading, and their support staff are all local and on the end of a phone if you need them (no call centres). Having been through a UK dedicated server comparison twice in the last 3 years (once again just recently), Dedipower came out as the most competitive for the service they were willing to offer, and I’ve been happy with the support service. In one instance, in fact, when I moved a sub-site off the server, they were on the phone to me within 10 minutes to tell me it was ‘down’ – at which point I had to explain it was expected and apologise for not notifying them in advance. You really can’t complain about that.

I hope that’s useful to someone. In case I need to point this out, I’m not getting paid or receiving discounts to promote either of these hosts, they’re just the two I’ve been most happy with over the ~10 years I’ve been hosting sites. YMMV but they’ve worked well for me :)


iPad – my first weekend

Apple’s new flagship product, the iPad, was only just released in countries outside the USA last Friday, and I was fortunate to get my hands on one on launch day. Like many Apple products, this one has divided opinion, with a lot of people decrying it as a device looking for a purpose, a device that falls between two stools (not as portable as a phone, not as functional as a laptop), a device that is stifled within Apple’s walled garden. Despite there already being a plethora of reviews out there on the internet, I thought I’d give my initial impressions after the first weekend.

First, some context

It would be illustrative first of all to set out my reasons for wanting to lay down some cash on a product such as this, in order to frame the context in which I’m evaluating it. Some potentially relevant facts:

  1. I don’t have an iPhone. I work from home, and I consider it extremely impolite to be constantly stabbing away at a phone while in a social gathering (you know who you are), therefore I can’t justify owning one. I have a far cheaper Nokia smartphone which does what I need just fine for the rare occasions when I need to check the internet on the go.
  2. I like Macs. This is an opinion which I’ve come to only in the last few years – despite studying user interface design as a module of my compsci degree, my interest in practical applications of the subject has only recently been piqued, and I’ve learned that Apple definitely groks these things better than most. I’ve also changed – I used to love taking apart my PCs, customising them to the nth degree, knowing every tweaked and tricked out element of it. Now, after 20 years, I find that kind of a bore and generally just want something that works, gets out of the way and lets me get on with what else I want to do, and I find Macs are good at that.
  3. I find touch interfaces very interesting. As an RSI sufferer for the last 7 or so years, I’ve become acutely aware of how terrible mice are as an ergonomic interface. Really, they’re awful – the wrist rotation, the fact your arm has to be right out to the side with most setups, these things are ergonomic suicide. At some point, unless you want chronic carpal tunnel syndrome, you’re going to have to switch to a track ball, a track pad, or one of those vastly overpriced vertically oriented mice. Personally, I try to use the keyboard for most things, which isn’t great but it’s better than the mouse, and a track pad on laptops as much as I can, which are much more natural. The prospect of a renaissance of user interfaces designed not to need a mouse, but to be entirely driven by touch, is something very appealing to me. It doesn’t work at all for sustained use when there are large, immovable vertical screens involved, and it doesn’t work that well when the device is too small – to me the pad form factor is the ideal for this approach.
  4. I watched Star Trek:TNG and lusted after their pads for years (even though they were just fake plastic slabs). Now it’s a reality! Who wouldn’t want that? ;)

Perhaps importantly, going into this I wasn’t looking to replace another device with the iPad; rather, I saw it as an opportunity to cover use cases where I considered the devices I already owned to be sub-optimal. I’ll cover those use cases later on when I discuss how things turned out in practice. So now, on to my evaluation…

Physical characteristics

I’ll try not to cover too much ground that’s already been adequately covered elsewhere. You already know the iPad is fantastically well constructed, beautiful to look at, and has a wonderfully bright and sharp screen (which is prone to fingerprints) – we’ll take all that as read. In my opinion, it’s not that heavy, but if you had planned to hold it up in front of your face with one hand for a long time, yes, that’s going to get uncomfortable. Personally, I don’t do that – just as when I’m reading a book of any size, I rest the iPad on my lap, either flat or propped up on edge with one hand, and that’s fine for several hours in my experience. Having said that, the sleek and smooth exterior can make you afraid of dropping it – however, I’m using it in a leather flip-case (I’ll cover that in a future post, it’s a good one), and in this configuration gripping it becomes a total non-issue.

The screen is sharper than I expected; it seems to have approximately the same pixel density as my MacBook Pro, since even though the resolution is lower, the screen is only 9.7 inches. The default brightness setting was a little dark for me, so I tweaked it up to about 75%, which was perfect. It’s a glossy screen which you may have problems with outside, but inside in full daylight (we have many windows in our house) and under regular lighting at night, reflection has not been an issue.
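That ‘same pixel density’ impression is easy to sanity-check with a quick calculation. I’m assuming the original iPad’s 1024×768 panel at 9.7 inches and, for the comparison, a 15-inch (15.4-inch diagonal) 1440×900 MacBook Pro – substitute your own machine’s numbers:

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch given screen resolution and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(1024, 768, 9.7)))   # iPad → 132
print(round(ppi(1440, 900, 15.4)))  # MacBook Pro → 110
```

So the iPad panel is actually slightly denser than the laptop’s – the same ballpark, which matches the impression.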

As has been pointed out, there are no cameras. Personally, that’s not something I care about – I don’t use my mobile phone camera either, and I have a far superior camera within 10 feet of me in my lounge if I need one. I can imagine a camera for video conferencing might be useful to some people, but I’ve had one in my MacBooks for 3 years and never used it, so really, this is not important to me.

Connectivity wise, we’re talking minimal – just a dock connector and Wi-Fi. It would have been nice to have a USB slot or two and especially an SD card slot (although you can get an adapter for that), but anyone who’s bought an iPod before knows the Apple way – don’t try to do everything, just try to do the core things better than anyone else. So, did they manage that?

General User Experience

I’d sum it up in one word – ‘butter’. The fact is that the iPad comes with only a small instruction card (and, bizarrely, a 300-page downloadable manual which I don’t think they expect anyone to read), and you don’t even need to read that. Seriously, a monkey could work this thing, and it wouldn’t even need any training. I consider myself a geek still, and some geeks seem to find user-friendly experiences offensive, as if they undermine the skills they’ve acquired, but I’m not one of them, and I admire what’s going on here. It’s a very direct, tactile experience that rewards experimentation and exploration, and just says “hey, come play with me, I won’t bite”. This, frankly, is how systems should be designed.

The lack of multitasking (due to be added in OS4 later in the year) is much less of an issue than I expected. Apps remember where you were, and tend to launch fast so switching between, say, Safari and an email that you’re in the middle of writing, works just fine and feels no different to true multitasking. The only thing missing is if you have apps which need to actively do things when you switch – such as IM or voice messaging, or music players other than the built-in iPod features. But honestly, so far I’m not missing it, even though I can imagine it being useful in some cases.

Many have said this is just a bigger iPod Touch. They’re right, but in the same way that a bay window is just a bigger porthole. If you think that doesn’t matter, maybe you wouldn’t mind replacing all the windows in your house with portholes. In my opinion, the size of this device is absolutely perfect for the purposes I use it for (see below).

So What’s It Good For?

These are the primary things the iPad is being used for in our house:

  1. Checking mail, web, news and social networks at home in a casual setting. When we’re doing things at home – watching TV, playing non-PC games, having guests around, or just between other things – it’s often useful to quickly check email or look something up on the web. Getting the laptop out takes too much time and it’s too bulky if you have friends around, and a phone is often too cramped, particularly if you want to show the contents to others or type more comfortably. The iPad has instantly become the way my wife and I do all these things when we’re not at the PC anyway, and it works really well. Websites display legibly with no scrolling around, and typing is fast (slower than a real keyboard but much faster than a phone). Most importantly, it doesn’t feel like a ‘work’ device and fits into a casual / social setting perfectly. My preferred way to check the detail of the day’s news after breakfast is now on the iPad via the Reuters app. YouTube works great on it too – I can catch up with my subscriptions very comfortably this way. As for the lack of Flash – in almost 3 days, I haven’t noticed, and I don’t think my wife even knows that Flash is not available. Maybe it’ll be an issue some time, but not so far.
  2. Touch gaming. I’ve specifically added ‘touch’ there because people trying to play normal games with traditional controls (using virtual joysticks etc) are completely barking up the wrong tree. Games on the iPad, like the iPhone, work best when they’re designed with a touch interface in mind, or at least adapt well to it (e.g. Plants vs Zombies). Flight Control and Harbormaster are good examples of this, where there’s just no way you could implement a game like this efficiently with anything other than touch controls, and they click in 2 seconds flat. To be honest, my wife has the most experience of the games so far, but the fact that it’s hard to get her off them is a fairly solid endorsement of the gaming capabilities of the device ;)
  3. Documentation. I don’t think I’d use a device like this for casual reading. A paperback is more rugged, cheap, and appropriate in the majority of cases than even dedicated devices like the Kindle, IMO. However, I do think e-readers are perfect for reference documentation, the kind of stuff you need to access randomly, search and dip into at a moment’s notice, often over several volumes, and for that I’m using GoodReader. Of course, if you’d be using that documentation at a PC anyway, you don’t need an e-reader. However, if you’re not at a PC, and you need access to this kind of information, then e-readers suddenly become useful. Because I have a minority usage for this, a dedicated e-reader has never been a worthwhile purchase for me, but as one feature in a multi-function device – that’s useful. In particular, I run a D&D campaign one night a week, and thus far have always needed a big stack of books next to me, which is a pain for space when we have a full crowd in. I’ve tried using laptops before, but they suck – they take up too much room and if you have them in a comfortable position in front of you they’re just too distracting. The iPad is the perfect size, and replaces several physical tomes with fully bookmarked, searchable texts, and it can sit to the side of me, available but not obtrusive, large enough to read but not too large to dominate the space. It’s by far the most practical device I’ve ever come across for this purpose, and I can’t help but think others will find places where it’s useful too.
  4. Photos. This might seem odd to call out as its own bullet point, but actually I think this is significant. Since we transitioned to digital photos, it’s made sharing them with family more awkward. Sure, you can use Facebook, but firstly – shockingly – many friends / family members don’t use Facebook (and no, I’m not going to pressure them to use it), and secondly it’s actually nice to show photos to people in person, and, you know, talk about them. Interact. Face to face. Radical stuff I know, but Facebook doesn’t solve that problem. In the past we’ve taken a laptop to other people’s houses, but that just feels clunky and geeky. And we don’t want to get them all printed, because that’s just a massive waste. And digital photo frames of any decent size are too expensive to justify. Enter the iPad – which can double as a photo frame and is very good at being a medium to share photos in person, just because its form factor works well – it’s easy to pass around or look at from multiple directions (rather than everyone crowding around a laptop screen). It’s a digital photo viewer that works in a multi-person environment, and the display is still large enough to do them justice.
  5. Sketching. It may not be a match for my wife’s Wacom tablet, but as a casual sketching tool (via SketchBook Pro) it works quite well. Obviously the touch interface is a no-brainer for this – it’s missing pressure sensitivity and angle detection, which the Wacom kit has, but even so it’s far more natural than drawing with a mouse, and considerably cheaper to try out than buying a full-featured tablet.

The important thing is that none of these things could be done as well with devices of another form factor, IMO. You could do them, but you’d be compromising something – such as screen space, comfort, instant accessibility. I think a pad form factor hits a sweet spot for these things, and that Apple’s implementation is confident and slick. In the end, that’s all I really wanted.

Conclusion

My personal opinion is that there definitely is room for a device of this form factor in the lives of many people. You can argue that iPad version 2 will have more features, or that an Android tablet (whenever they arrive in product form) will do better later on and have more apps because of the open architecture, but I think the phrase ‘a bird in the hand is worth two in the bush’ is relevant here. In technology, there’s always going to be something better in the future – that’s a universal constant at any point in time, for any product. Right now, the iPad pushes exactly the buttons I wanted it to push. That doesn’t mean there isn’t potential for more, but what it does do, it does extremely well. And more importantly, it does it right now, not at some theoretical point in the future. That has value to me.

Perhaps the best illustration is that the iPad has been in pretty much constant use since purchase, barring when it’s on charge, and so far the split has been 70/30 in favour of my wife, whose review comments are simple: “It’s cool”. I concur.


Windows 7 switcharoo

Spring is usually a time of change, and I finally got a gap in my schedule where I could wipe my primary Windows machine and install Windows 7 (64-bit). It’s had XP on it for years – my experience with Vista on secondary test machines quashed any desire to ‘upgrade’ my primary work environment, and despite owning Windows 7 for some months, a number of things had stopped me installing it, from lack of driver support for my office wifi-connected all-in-one printer / scanner, to work commitments where I couldn’t afford to take the time out to reinstall and set up several complex environments.

So, as someone who hated Vista, what are my impressions? It’s actually pretty good. It’s not perfect, but it’s definitely better than what came before, which is exactly what I expect from an upgrade (and exactly what Vista didn’t deliver). Things I like:

  • Taskbar – clearly inspired by the OS X Dock, but it adds some features of its own too, like jump lists and window previews. I’d prefer it if clicking the button when there are multiple windows open switched to the last one you’d been using, instead of forcing you to pick one, but on the whole it works well.
  • Windows Update gets out of my face – Vista’s Windows Update was a dog, while it was kicking in it sucked resources like a bastard and threw off any performance testing I was doing, sometimes for long periods (made worse perhaps by the fact that I didn’t use Vista that much). The new one seems much lighter.
  • Responsive – they say they’ve improved the parallelism in many systems, and it certainly feels like it. The same machine feels faster on Windows 7, compared to feeling slower in Vista.
  • Libraries – these are like customising your sidebar in OS X’s Finder, but add more features like grouping and collection searching. Nice.
  • Devices & Printers – when I looked at this view and saw that it came up with photos of my exact mouse, printer etc, without any specific drivers etc installed apart from the base system, I thought that was pretty cool. It’s actually a useful view in practice too, but the pictures made me grin, because I’m shallow. But then, if you don’t smile at least once when you use an OS for the first time, something is wrong on the usability side.

There are some stupid things though:

  • UAC remains dumb – it still text-matches filenames like ‘patch.exe’ and arbitrarily decides that they need to be admin-level. Sure, you can tack a manifest onto the executable to tell it not to, but for Christ’s sake, talk about a blunt instrument.
  • Startup items – Why is changing what apps load at login still so esoteric? It hasn’t changed since Windows 95, and they’ve hidden the Startup menu by default now (and services are even harder to find). Just not user-friendly at all – compare to OS X where Login Items is very simple for anyone to use.
  • Network Drive Login Scope is obscure – this is new in Windows 7: if you connect to a NAS or other network drive, enter your login and tick the ‘Remember’ checkbox, it only actually remembers until you log out, not permanently as in previous versions. Changing this is very obtuse and user-unfriendly – you have to open Credential Manager, delete the existing credential (because editing the scope is, for no particular reason, not possible), and re-create it with the same details (again, scope is not an explicit option, so you just have to go on faith here). By doing this the scope becomes ‘Enterprise’ rather than ‘Session’ (which is obscure in itself), and the result is that your credentials will be remembered across logouts / reboots as in XP / Vista. It took me some forum browsing to figure this out, and it’s just not an intuitive design. Adding a ‘scope’ combo to the remember option that says ‘Until logout’ or ‘Forever’ would solve it, but no, that would just be too simple and intuitive.
  • Aero Shake is just silly
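On the UAC point above, for anyone who hits it: the behaviour is Windows’ ‘installer detection’ heuristic, and the manifest opt-out mentioned is a requestedExecutionLevel of asInvoker embedded in (or placed alongside) the executable – roughly this shape:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <!-- Run with the caller's token; suppresses installer detection -->
        <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
```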

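And on the credential scope point: an alternative to clicking through Credential Manager is the cmdkey command-line tool, which can delete the session-scoped entry and re-create it as a stored credential. A sketch – the server and account names are placeholders, and the persistence behaviour is worth verifying on your own machine:

```
REM Remove the session-scoped credential Windows created
cmdkey /delete:mynas

REM Re-create it from the command line; credentials added this way
REM are stored in Credential Manager rather than per-session
cmdkey /add:mynas /user:mynas\alice /pass:s3cret
```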
But the bottom line is that, on balance, it’s pleasant to use despite a few oddities, and I’m happy with Windows 7 as my main Windows OS now in a way I never was with Vista. I still find OS X more pleasant to use, but this is the closest Windows has ever come to it, and it adds a few ideas of its own too that, importantly, actually work and add value – unlike Vista, which mostly imitated and whose additions just fell flat (Flip3D, I’m looking at you). So, a good OS, and the first one from MS since 2001 that I don’t regret spending money on.


The future for tech is fragmentation

Our rampantly consumerist world has many facets, pros and cons, but one thing has so far been perceived as a universal constant – the quest for the ‘next big thing’. That one product, or class of product, that every man, woman and small furry creature from Alpha Centauri wants to get their hands on. In the technology world, analysts have long been riding the gravy train of purporting to peer sagely into this murky future in order to extract those world-changing gems that everyone will be invested in.

I don’t think this is the case anymore, at least not to the extent that one product or product category can be seen as ‘the future’. In much the same way as in days of yore there were only a small handful of TV channels that everyone watched, compared to now where everyone has a million channels plus the Internet to cater for their media absorption needs, I think the technology needs of the general public can no longer be stuffed into one universal pigeon-hole as has happened in the past.

Up until recently, everyone felt they needed a PC of some sort, and most of them also felt they needed to use Windows, because that’s what they used at work and what most PC vendors pre-loaded their machines with. Cue a huge homogeneous mass of people on the same technology – exactly the sort of ‘next big thing’ that has been common over the last 20 years.

But, as computing resources get smaller and more connected, people are realising that they don’t necessarily need all the things a PC can do, all the time. Things we would traditionally consider to be ‘computing devices’ are fragmenting along functional lines, in much the same way as household devices always have been – your toaster, your microwave, your TV. The idea that there needs to be a single device that has the ability to do everything is something that increasingly only technophiles will hold on to, because specialism almost always means an improvement in the user experience for the given functional subset. Think about it – sure, you could build a device for the home that let you wash dishes, watch TV and toast bread all in one package – but why would you? The toaster works well doing what it does, taking up only the space it needs to, while the dishwasher and TV are specialised for their tasks too. Plus, you can use them simultaneously for different things. Sure, household devices have a common medium – plumbing, electricity etc – but they’re inherently separate, and all the better for it.

Computing devices are really no different. Over the last 20 years we’ve been conditioned to expect that we all need a common beige box that does everything anyone could possibly need, but in fact our lifestyles don’t agree with that at all. The common medium of the Internet is ubiquitous, but apart from that we all want to do different things with technology, and even within our own lives we need different things in different circumstances. When I want to do intensive tasks like writing code or editing video, I want a full keyboard, a hi-res screen and a lot of processing power. But when I’m just checking my email on the go, I want something small and portable with good battery life, and I’m willing to sacrifice powerful CPUs, large screens, and full keyboards for that. If I’m on the sofa and want to check a website or read an article, a pad-style device would work best for that – bigger than a phone, but more casual and form-friendly than a laptop. Even in the context of a single user, our lives are not geared to single devices that do everything, and in fact there are hard limits that prevent any one device, even the best smartphone in the world, from fulfilling this – even if you could shoehorn the power into a smartphone, you’ll never replicate the full keyboard or screen short of things seen only in Inspector Gadget. So it’s not at all surprising that now that technology is allowing these devices to morph into more functionally specific roles, people are snapping them up – much to the horror of people who have a vested interest in the PC being the future of everything, of course.

Cloud computing may be the only fashionable technology that cuts across all of this (which is why everyone is so scared of Google), but even then, people (particularly businesses) are just not ready to give away control of all their data yet, and roaming data charges – since you’re not always near a free Wi-Fi spot – are still nowhere near where they need to be for people to rely on non-local storage entirely. So again, cloud computing is going to co-exist in the overall technology soup with everything else.

I think the next few years are going to be more interesting in technology than the last 20 have been by quite a long way, simply because of the way it’s going to blend into our lives better. Standard office bureaucracies may well be locked into the standard Microsoft PC / Server model for quite a while yet, and power users (me included) are still going to be buying PCs and laptops in addition to specific devices, but outside of that, things are set for major change in multiple directions. I like that – technology, like fashion, should be a personal choice, tailored to your lifestyle, varied and multidimensional depending on your frequently changing environment and needs. The one-size-fits-all model is dead, and I don’t think many outside of Redmond will mourn its passing.

Internet Tech Web

Who cares what’s trending?

Trends – or as I would call them, rampant fads populated by people looking to leverage the best buzzwords to get VCs to throw money at them – come and go. The one constant is the claim that <insert trend here> is so awesome that it will universally and irreversibly replace <insert existing technology here>, to the extent that if you’re using or producing <insert existing technology here>, you are irretrievably lame, and complete strangers will point at you in the street and laugh at your horribly backward ways.

The fact is though, the best that today’s trends can aspire to is to become the existing proven technology that tomorrow’s trends will point and laugh at. That’s if they do well – most will simply evaporate and leave the world as if they never were. It’s rather beautiful in its own way, a sort of karmic circle where the unjustified elitism associated with being part of the ‘hip’ crowd is eventually cruelly punished by the derision of those who replace them.

The current trending darling is cloud computing, following in the wake of the dot com boom, the social networking explosion, and yes, even open source. Let’s face it, there are quite a lot of people and companies who participated in open source not because of the fundamentals, but because for a while including open source on your corporate manifesto was a damn good way to get funding. Now that open source is no longer a leading trend that you can sell to VCs (it’s graduated to ‘mature’ and has therefore lost its sparkle to a certain breed of person), the piranhas have swum elsewhere. Good riddance, I say.

Trends are like the Borg – they’re not happy to be just a part of a diverse technical melting pot, they have to be front-and-centre in everything, and want everyone else to be defined in terms of themselves. So predictably, now we’re told that everything will eventually run in the cloud, and that the browser will be our only OS, and every company chasing funding right now is trying to shoehorn some cloud aspect into their corporate plans. What a load of old rubbish – while I fully expect cloud computing to be one of the ‘stayers’, just like open source, it’s only going to be a part of the whole. I fully expect us to make far more use of hosted & distributed capabilities in the future, but I know for a fact that dedicated platforms are never going to go away – they’ll simply blend.

I could make all kinds of detailed arguments as to why browser-based servicing of all needs is not a panacea, but there is one fundamental issue that is most important – generalised tools and grand unified visions always fail, even when they make perfect sense to a designer or ‘visionary’.

Unified visions and perfect generalised solutions only exist in the head of one person, usually a designer who has ‘seen the future’ and realises that with some adaptation, he can express all things in terms of the model he has in his head, just with some funky parameterisation. Eureka!

But, regular people don’t want generalisation or unification, only designers do. You’ll generally get a good response from developers, technicians and sometimes ‘extreme power users’ if you pitch highly adaptable generalised toolsets to them (open source anyone?), because they are adapters and creators, but try to package that approach into an end product for the masses and it just won’t work. At the sharp end, all that matters is that a piece of tech does the one or two main things that it’s designed for, really, really well, and everything else is irrelevant – Apple figured this out years ago, and it’s why the iPod crushed its arguably more fully featured competitors. Generalisation is just not a feature regular people want – quite the opposite, they want specialisation.

The idea that in future all things will be done through a general browser to the cloud is a designer’s vision that will never happen. In the same way that the general public is moving away from using a single PC to do everything, and instead likes to use devices that better reflect the use context and purpose (but to have them all connect together), the vision of a unified application (browser) that can do everything is similarly flawed. The iPhone allegedly was originally conceived to use its browser for everything, but in practice most people preferred to use dedicated apps for each purpose (that could talk to the internet anyway) because they’re more functional.

So, who cares about trends anyway?

Development OGRE Tech

Building a new technical documentation tool chain

Writing good documentation is hard. While I happen to think that API references generated from source code can be extremely useful, they’re only part of the story, and eventually everyone needs to write something more substantial for their software. You can get away with writing HTML directly, and separately using a word processor to write PDFs, for only so long; eventually you need a proper tool chain with the following characteristics:

  • Lets the author concentrate on content rather than style
  • Generates multiple formats from one source (HTML, PDF, man pages, HTML Help etc)
  • Does all the tedious work for you such as TOCs, cross-references, source code highlighting, footnotes
  • Is friendly to source control systems & diffs in general
  • Standard enough that you could submit the content to a publisher if you wanted to
  • Preferably cross-platform, standards-based and not oriented to any particular language or technology

When I came to write the OGRE manual many, many years ago, I went with Texinfo – it seemed a good idea at the time, and ticked most of the boxes above. The syntax is often a bit esoteric, and the tools used to generate output are frequently a bit flaky (texi2html has caused me many headaches over the years thanks to poorly documented breaking changes), but it worked most of the time.

I’ve been meaning to replace this tool chain with something else for new projects for a while, and DocBook sprang to mind since it’s the ‘new standard’ for technical documentation. It’s quite popular with open source projects now and it’s the preferred format for many publishers such as O’Reilly. In the short term, I want to write some developer instructions for OGRE for our future Mercurial setup, but in the long term, I’d really like a good documentation tool chain for all sorts of other purposes, and Texinfo feels increasingly unsatisfactory these days.
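For flavour, here’s a minimal sketch of what DocBook 5 source looks like – the element names (book, chapter, para, xref, programlisting) are standard DocBook, but the titles and content are invented purely for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<book xmlns="http://docbook.org/ns/docbook" version="5.0">
  <info>
    <title>Example Manual</title>
  </info>
  <chapter xml:id="getting-started">
    <title>Getting Started</title>
    <!-- Cross-references are written semantically; the stylesheets
         fill in the link text and numbering at build time -->
    <para>Build instructions are in <xref linkend="building"/>.</para>
  </chapter>
  <chapter xml:id="building">
    <title>Building</title>
    <para>Source listings are marked up by language for highlighting:</para>
    <programlisting language="cpp">int main() { return 0; }</programlisting>
  </chapter>
</book>
```

The stylesheets take care of the TOC, numbering and cross-reference text, and the same source can be transformed to HTML or to PDF (via XSL-FO), so the author stays focused on content rather than style.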

Having spent some time this week establishing a new working tool chain, and encountering & resolving a number of issues along the way, I thought I’d share my setup with you.


Personal Tech

iPad first impressions

Yesterday saw world-plus-dog in the technology sector glued to Apple’s announcement of their new tablet device, which has now been officially dubbed the iPad. Basically, when you boil it down it’s a super-sized iPod Touch with optional 3G support and a few more apps.

Reaction has ranged, as usual, from the ecstatic “I’ve seen the face of God, and his name is Steve”, to “What a useless piece of junk”, stopping at most points in between. In the more negative camp, lots of talk has centred on what it doesn’t have (multitasking, a camera, a USB port, Flash), and some people seem to find it hard to grasp the usage conditions of a device that neither fits in your pocket, nor does everything a laptop does.

Personally, I’m cautiously optimistic. The device was never supposed to be a phone or a laptop, so I’m curious why people are comparing it to one – the point is that it’s something else. I can actually think of multiple use cases where a device of this form factor and capability would be useful to me. Here are a few examples:

  1. I’ve thought about buying an eReader before, but have always been completely unsatisfied with the existing solutions: current e-ink devices are fine for reading black and white novels, but don’t handle A4 formatted content at all well, can’t do colour, take far too long to flip through pages, and are basically unusable for keyboard input, making searching impractical – and therefore these devices do not satisfy my need for a reader that replaces my bookshelf (physical and virtual) of reference material at all. The iPad, however, looks like it would be able to do that much better.
  2. Sometimes I’m in the living room or kitchen and I’d just like to look something up on the web; maybe check some news or look up a recipe – just a 5-10 minute thing. Firing up the laptop just for this is overkill, but the pages are too small to really read properly on a phone. In the end I do one of these things anyway but it’s never ideal. Again a tablet form factor would be perfect for this.
  3. When we’re showing photos to family and friends, these days we do it on a laptop because we never print anything. It’s not ideal; even the most elegantly built laptop requires everyone to crowd around the screen behind you – it’s awkward. If I had a tablet to do it, one I could easily hold up and pass around, that would work much better.
  4. When I’m in a social situation where it would be useful to have intermittent access to some documents or other information that’s too big to fit on a phone screen comfortably, currently you need a laptop to do it. Laptops are really, really unsociable to have out on a table with others around (say at a meeting), because of the way they need to be used, with a screen forming a psychological barrier between you and whoever else is on the opposite side of the table. This happens all over the place: I strongly feel that laptops are the scourge of coffee shops today, turning a social space into a cluster of virtual mini-cubicles with individuals hunched behind screens not talking to anyone. I also play pen-and-paper RPGs socially, and over the years I’ve tried to use a laptop with many highly useful applications as an accessory, and it’s never, ever worked. Even the smaller laptops are too obtrusive, but a phone is just too small to be useful. I’d love to try using an iPad with some dedicated apps for tracking things.

I’m sure there are other examples. Basically I think people need to get over the fact that it doesn’t improve on what they currently use their phone or laptop for – that’s really not the point. I see the iPad as a ‘gap filler’ – and I can certainly see some gaps for it to fill in my life.

The price is much better than expected too, mostly because it’s an upgrade of an iPod rather than a downgrade of a laptop. I’d skip the 3G option because it’s pointless for me, I’d only use it on wifi, so that makes it not that much more expensive than a top-end iPod Touch.

But, it’s not all roses. The lack of Flash is an issue for web compatibility, although at least video through HTML5 is starting to happen (YouTube added it recently). The lack of multitasking is a bit disappointing, but might be relaxed in an OS update later. The GPU capabilities are largely unexplored online so far; it seems it’s probably as powerful as an iPhone 3G but falls short of the 3GS (so GLES 1.1). I’ve also heard today that iBooks might not be available in non-US countries at launch, which definitely undermines the offering as an eReader.

So, depending on the practicalities when it’s released over here, I may or may not grab one. I can definitely see places in my life where a not-a-phone-or-laptop device would be useful, and frankly, I’m intrigued by the possibilities of where this kind of device may go in future.

Personal Random Tech

Cheap, simple gadget satisfaction

Like most members of the male species, and particularly the geekier types, I love gadgets. Complex ones are great, but sometimes the greatest satisfaction can come from simple things that just work really well. Here are a couple of recent buys of mine that fall into this category that I thought I’d share.

Joby Gorillapod

When we’re on holiday I often spend time trying to find places to put the camera so we can do a timer shot with us both in the picture, and when you’re in forests and up mountains finding a level spot is tough. I’ve gotten quite good at it, squinting at rocky outcrops and tree stumps with an almost film director level of interest, but it’s still awkward and sometimes precarious; this year in the Canadian Rockies I placed the camera on a rocky slope and only realised when I had to charge down again how many rocks were between me and the ‘mark’ I had to be at within 10 seconds, and I almost came a cropper, much to the displeasure of my wife.

I’d seen the Gorillapod before but kept forgetting to buy one before we went on holiday, so this time I bought one as soon as I thought about it, even if it’ll be sitting around unused for a while. Basically it’s just a small tripod made from a series of ball joints, each one perfectly stiff under the weight of a camera but easy enough to move, and with rubber surrounds on every joint and on the ends for grip. It’s very bendy and yet very sturdy once it’s set, so you can use it as a regular mini-tripod (but can adjust for uneven surfaces really easily), or you can suspend it from tree branches and poles, secure it up on top of fences or bollards just by bracing it, and all kinds of things. It just clips on to a small tripod mount and folds up really small.

It’s just an incredibly useful little gadget that I wish I’d had for holidays ages ago, and I imagine regular photographers would find it invaluable too.

Bicycle iPod Mounts (for drum kits)

I don’t ride a bike anymore, but after setting up my drum kit I realised I needed somewhere to mount my iPod if I was going to hook it up for practice, rather than having it on the floor or using gaffer tape or something. Surprisingly there didn’t appear to be any standard accessories to do this (a bit of an oversight on Roland’s part I think since this must be a common requirement), so I was nosing around in the VDrums forum and discovered that most people were just using regular old bicycle mountings, and attaching them to one of the cymbal riser arms (since they’re about the same diameter as bicycle handlebars, compared to the main drum frame which is much thicker).

They were cheap so I gave it a try, and sure enough it works beautifully – you wouldn’t know that the mounting wasn’t made entirely for this specific purpose in fact. Score one for the community :)

Business Open Source Tech

It’s all about the middle ground

I always find Matt Asay’s blog an interesting read – even if I don’t always agree with him, his posts on open source are always thought provoking. Today he was talking about how Wikipedia’s contribution rate is falling and how that has parallels in open source; that the community is no replacement for a centralised, focussed team.

He’s right on the core point – at the heart of every successful open source project there’s always a core team (or individual), and in the really influential ones, that team is usually funded – Mozilla is famously bankrolled almost entirely by Google, the Apache foundation has many, many sponsors including Google, Yahoo and Microsoft, Eclipse has IBM, and so on. Many of the big projects that don’t have more general sponsorship still have a core team funded by a dual-license or other premium software model: MySQL, RedHat/JBoss, Qt etc. Such guidance & direction at the core is crucial – at OGRE we have a core team too, except that we’re not directly funded by anyone in terms of developer time (we have several generous sponsors who cover the majority of our hosting needs); we guide it because we want to, and because we use OGRE ourselves too. My company is probably the closest thing to a core development sponsor, in that I’ll allocate “work time” to doing OGRE development that could otherwise be spent making commercial products or doing consultancy, but it’s by necessity small beer compared to the likes of Mozilla and Apache.

But I do think he underplays the changes that have taken place in the software development world. He asserts that because most headline software development is still focussed at big influential companies, we’ve mostly just rearranged the chairs a bit at the same banquet. I don’t agree with that at all – by nature it still makes most sense to concentrate much of the development in a small team for quality, consistency and organisational purposes, but the point is that precisely where this centre sits is determined primarily by merit, not by the boundaries of a company’s org chart. While the core team is doing a good job, and accepting reasonable patches and such, people are happy for the show to be run there. The community is still definitely involved in the development, and certainly adds considerably to the end result. Yes, proportionately the central team does more, but crucially, should anything go badly wrong – such as the core going in a direction a lot of people don’t like, or the product being sidelined – if there’s enough of a community a fork will emerge, with another core team to lead it. That’s a critical safety valve that keeps companies more “honest” than they had to be in the past, and is a vital insurance policy for anyone investing their own resources in a piece of software. Matt claims the ‘Command and Control’ setup of software vendors is still in place; I think his view is clouded by the fact that he’s solely focussed on enterprise software, and enterprise customers move at such a glacial pace that any change is largely imperceptible – to the extent that ‘community’ maybe does look a lot like the ‘customers / partners’ relationship of old.
But that would be a bad call, completely ignoring the difference in the level of control that is ceded to a community versus the customers of old – sure, many enterprise customers may not wish to leverage that control, and would take a long time to move if someone else chose to do so, but that option is still always there. And not everyone in the world is an enterprise customer – the enterprise usually follows the grass roots eventually.

In practice, it’s really all about balance, the middle ground. Yes, we still need foci of development just to make sure things get done in a reasonable fashion – no-one likes chaos in their software. Yes, it makes most sense to have that focus funded, in a traditional company model, if that piece of software gets beyond a certain size / popularity. But that doesn’t for a second undermine the value of community participation; in fact the two are deeply interdependent – one without the other is just not sustainable in a sizeable project.

So, people certainly shouldn’t be deluded into thinking that random crowds of people on the internet will create great software without some organisation (the infinite monkeys creating Shakespeare fallacy), but they also shouldn’t think that community is disposable and that we’re in the same situation we were before but with a different label. Nothing could be further from the truth.