Australia’s NBN (National Broadband Network) has just had its first point of presence officially opened in Tasmania, even as the whole visionary NBN infrastructure project stands a chance of being canned by the Coalition (Liberal-National) parties if they take power after next Saturday’s federal election.
There have been many threads of discussion about the politics, business case and technologies of the NBN. One argument among the politicians has been about the link speeds involved: why we need a uniform 100 Mbps network (and, in the last couple of days, the news that tests have shown the NBN will actually support 1 Gbps, which is no surprise to anybody up with the technology but which provoked open incredulity from the Coalition leader).
Anyway, a little to my surprise, The Australian newspaper ran this article by Stuart Kennedy the other day: Ultra-fast broadband will be slow on overseas links, which will be news only to the naive, but I suppose for their sake is worth an airing.
Then, over at the NBN Australia group on LinkedIn, there was this commentary on the above article. That discussion won’t be publicly accessible, I expect, so I thought that I should repeat my contribution to the NBN group discussion below, for public view.
- - - - - - - - - -
I’ve been closely following IT technical issues, such as systems performance and communications technology developments, with all their twists and turns, for decades during my career at IBM and afterwards as an independent consultant.
It's been well understood for decades that the overall performance of a service (such as accessing a web site) is dependent on the individual performance of each of the sometimes many steps that contribute to that service.
In this case that means the performance of the local device (desktop PC, laptop, smartphone, or whatever), the various communications links (wired, wireless, satellite), and the remote-end service (usually a web server). Understanding and correcting or tuning performance can be a very complicated art.
Queuing theory can be applied to the individual steps, and you can come up with a good estimate of the overall performance behavior as well as that of each individual step (each "link" in the chain). There's a queue at each step, and the overall service time can be estimated by summing the service times of all the individual steps.
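To make that concrete, here is a minimal sketch in Python (all the service and arrival rates are invented for illustration) that treats each step in the chain as a simple M/M/1 queue and sums the mean residence times to estimate the end-to-end response time:

```python
# Estimate end-to-end response time by modelling each step in the service
# chain as an independent M/M/1 queue and summing the mean residence times.
# All rates below are invented for illustration only.

steps = {
    # name: (service_rate_per_sec, arrival_rate_per_sec)
    "local PC":      (200.0, 50.0),
    "access link":   (120.0, 80.0),
    "backbone link": (500.0, 300.0),
    "web server":    (150.0, 100.0),
}

total = 0.0
for name, (mu, lam) in steps.items():
    assert lam < mu, f"{name} is saturated"  # M/M/1 requires utilisation < 1
    w = 1.0 / (mu - lam)  # mean time in system (queueing + service), seconds
    total += w
    print(f"{name:13s}: utilisation {lam/mu:.0%}, mean time {w*1000:5.1f} ms")

print(f"estimated end-to-end response time: {total*1000:.1f} ms")
```

Note how the most heavily utilised step dominates the total, which is why performance tuning starts with finding that step.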
Behavior of queues at times can be rather strange and unexpected, such as on a freeway at certain busy times of the day -- see a fascinating "shockwave" queuing example at http://www.newscientist.com/article/dn13402-shockwave-traffic-jam-recreated-for-first-time.html (and be sure to watch the video).
For a specific session (such as a connection to a local or overseas newspaper web site), it's not much use having one very fast link unless the other links perform similarly, so a balanced series of steps is optimal for both performance and cost reasons. Or, looking at it in reverse, there's not much point in having lots of fast and expensive links if even one link is much slower (and some local, in-country sites can be poor performers; it's not just overseas links and web sites that are slow). A small sketch of this bottleneck effect follows.
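The bottleneck effect takes only a few lines of Python to show (the link speeds here are invented): end-to-end throughput is simply the minimum across the chain, so upgrading any non-bottleneck link buys nothing.

```python
# End-to-end throughput is capped by the slowest link in the chain, so
# upgrading any other link changes nothing. Speeds are illustrative (Mbps).

def effective_throughput(link_speeds_mbps):
    return min(link_speeds_mbps)

chain = {
    "home access": 100,
    "backbone": 10_000,
    "undersea cable share": 20,
    "remote server": 50,
}

print("bottleneck link:", min(chain, key=chain.get))
print("effective throughput:", effective_throughput(chain.values()), "Mbps")

# A tenfold upgrade of the home link makes no difference while the
# undersea cable share remains at 20 Mbps:
chain["home access"] = 1000
print("after home upgrade:", effective_throughput(chain.values()), "Mbps")
```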
Coalition leader Tony Abbott says that Labor's NBN is like buying a Ferrari when a GM Holden Commodore is all that's needed and affordable.
This affordability argument is simplistic. A better analogy would be a fleet of ambulances that, suppose, is measured over a period (weeks or months) to be running at an average speed of 55 km/h. It would be a false economy to purchase a fleet specifying that the ambulances need a top speed of only, say, 70 km/h, since much higher speeds are often needed.
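To see why the average misleads, here is a toy simulation (all figures invented) in which the fleet's mean required speed comes out near 55 km/h, yet a meaningful fraction of trips demands far more than a 70 km/h cap:

```python
# A synthetic distribution of required trip speeds: mostly routine trips,
# with a minority of genuine emergencies. All numbers are invented.
import random

random.seed(42)
speeds = ([random.gauss(50, 10) for _ in range(900)] +   # routine trips
          [random.gauss(110, 15) for _ in range(100)])   # emergencies

speeds.sort()
mean = sum(speeds) / len(speeds)
p95 = speeds[int(0.95 * len(speeds))]

print(f"average required speed:  {mean:.0f} km/h")
print(f"95th percentile:         {p95:.0f} km/h")
print(f"trips needing > 70 km/h: {sum(s > 70 for s in speeds) / len(speeds):.0%}")
```

Sizing the fleet to the average (or a little above it) would leave roughly one trip in ten under-served.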
Russell Yardley [a respondent in the LinkedIn group’s discussion] mentioned other reasons for having high speed, such as for cloud services. Consider one such service: off-site backup. You might have tens of gigabytes to be backed up regularly, perhaps daily. What broadband characteristics make this feasible?
Currently, most Australian ISPs don't offer high enough upload speeds for such backup to be realistic even if carried out entirely within the country, much less across the Pacific, where there are many providers of such a service. Very high upload speeds -- as well as download speeds, for the recovery phase -- are essential, and it's here that the Holden Commodore analogy breaks down. (Not to mention that Australian broadband plans all meter traffic, making such a backup/recovery service unjustifiable in terms of cost as well as performance.)
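Some back-of-envelope arithmetic makes the point; the rates below are illustrative only, not any particular plan's figures. Upload time is just the backup volume divided by the sustained upload rate:

```python
# Hours needed to upload a backup of a given size at various sustained
# upload rates. Rates are illustrative, not any specific plan's figures.

def upload_hours(gigabytes, mbps):
    bits = gigabytes * 8 * 1000**3        # decimal gigabytes to bits
    return bits / (mbps * 1_000_000) / 3600

backup_gb = 50
for label, mbps in [("~1 Mbps ADSL-class upload", 1),
                    ("40 Mbps fibre-class upload", 40),
                    ("400 Mbps upload", 400)]:
    print(f"{label:27s}: {upload_hours(backup_gb, mbps):6.1f} hours for {backup_gb} GB")
```

At roughly 1 Mbps up, a 50 GB backup takes over 110 hours; at 40 Mbps it drops to under three hours. That is the difference between an impossible daily backup and a routine one.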
I could go on and on ...