Jun 21, 2012
Is now the time to take a step back and re-consider not just the 700 MHz band plan, but everything from 450 to 960 MHz?
There is an old folk tale of a tourist on a driving holiday in Ireland, stopping a farmer and asking him how to get to the nearby town. His answer is, "If I were you, I wouldn't start here". The same may be true for the regulatory journey that policy over UHF spectrum is now on. Would we really continue our journey from where we are today, or would we, could we, start again?
The recent moves by African and Middle Eastern countries at WRC-12 to release spectrum at 700 MHz for mobile broadband services in Region 1 (Europe/Middle East/Africa) seem to me a perfect example. The issue these countries were trying to overcome is that the 800 MHz 'digital dividend' instigated by the European countries is in use in many non-EU countries for CDMA networks, which operate in the different 800 MHz band plan usually used in Region 2 (the Americas). This Region 2 band is largely incompatible with the European digital dividend, and to some extent with the original GSM 900 MHz band, and thus a more suitable alternative has been proposed. The problem now is that the 700 MHz digital dividend band plan, as proposed by Region 3 (Asia), is not wholly compatible with the 800 MHz Region 1 digital dividend. And talk has begun of the possibility of a 600 MHz 'third' digital dividend at some future date.
The situation (in spectrum terms) is shown in the diagram below.
The above does not even begin to consider the wider international harmonisation problems occurring from the fact that North America has adopted a different 700 MHz band plan to most other countries. This lack of harmonisation, together with the proliferation of bands, is a headache for mobile manufacturers who have to try and cram all of them into handsets in order to produce 'world phones' with the widest possible market, and largest economies of scale.
In the words of my Irish farmer friend, "I wouldn't start here", and I believe he is right. At the recent EU Spectrum Management conference, Richard Marsden of NERA suggested that, "perhaps the time has come to consider aligning mobile bands across all three ITU regions". Further, I would argue that it makes sense if the use of all UHF frequencies from 450 to 960 MHz were also harmonised internationally.
Where would I start? I'd consider where we are most likely to be in, say, 15 to 20 years' time. In this time-frame, broadcasting will be almost exclusively HD; 3D and maybe even UHDTV will be with us, meaning that terrestrial broadcasting networks will not be able to support the multitude and variety of television channels that will exist. The terrestrial broadcast network may still be a suitable means of delivering content in some (smaller) countries with fewer channels, but in general it will be replaced with cable, IPTV over fibre, and satellite, all of which are capable of delivering higher bandwidth services.
Public safety users will be on networks that use LTE (or whatever comes after it) and will be using a combination of commercial networks and their own networks to provide resilience and continuity in times of emergency or disaster. These 'blue light' networks may have bespoke spectrum, but may share this with commercial networks much of the time, using licensed spectrum access or some similar approach to ensure that they can gain access at a moment's notice when needed. Similarly, business radio users will have migrated to digital networks such as LTE and won't need dedicated spectrum.
Short range devices will all be 'cognitive', using geo-location databases to select unused frequencies; they will be designed to sit alongside and within the bands of all manner of other users (including wireless broadband) and will share spectrum with pretty much anyone.
What will the band plan look like? If we continue the way we are going today, it will look something like this Lego house that my son, Jamie, and I built to help illustrate the situation. The bands from 450 to 960 MHz will have been lashed together in a very haphazard way to produce something akin to a contiguous block. Like our construction here, at a distance it will look like a house, but one built piecemeal with extensions added willy-nilly as new spectrum bands become available. It will look house-shaped, but on closer inspection it will be clear that it is just a series of blocks randomly joined together to form the impression of a house.
Instead, we need to be laying down some firm foundations. Rather than slowly adding pieces of spectrum one by one, how about architecting a plan for the future, and when new pieces of spectrum become available, using them in a way that will eventually lead to a properly integrated and harmonised band plan? This would require enormous regulatory upheaval at both the regional and international level, but if it's not done now, we risk a fragmented, inefficient future.
It needn't be complicated. The diagram below shows the basic tessellating structure for each adjacent band.
A fixed amount, say 40 MHz, is assigned for uplink. A 5 MHz guard band is left, then another 40 MHz for downlink. The next band has the same proportions but is reversed in terms of up and downlink so that the two bands can butt up against each other without causing interference. This pattern could be repeated exactly 6 times between 450 and 960 MHz giving 240 MHz of uplink, 240 MHz of downlink and 30 MHz of guard band (which might be useful for PMSE and some of the low power devices). At present, with a bit of reorganisation, we could easily occupy the top 3 'floors' of this re-designed spectrum house: 705-745//750-790 MHz, 790-830//835-875 MHz and 875-915//920-960 MHz. Some countries could even perhaps open the ground floor, 450-490//495-535 MHz. Then, if and when terrestrial television is removed from the UHF band, we could re-decorate floors 1 and 2 as well!
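The arithmetic of this tessellating plan can be checked with a short script. The figures (40 MHz paired blocks, 5 MHz guard bands, the 450 to 960 MHz range) come from the text above; the variable names are mine and purely illustrative.

```python
# Sketch of the proposed tessellating UHF band plan described above.
# Block sizes and the frequency range are taken from the text;
# names are illustrative only.

BLOCK = 40                  # MHz per uplink or downlink block
GUARD = 5                   # MHz guard band between the paired blocks
FLOOR = 2 * BLOCK + GUARD   # 85 MHz per 'floor' of the spectrum house

START, END = 450, 960
floors = (END - START) // FLOOR  # how many times the pattern repeats

print(f"floors: {floors}")                       # 6
print(f"uplink total:   {floors * BLOCK} MHz")   # 240
print(f"downlink total: {floors * BLOCK} MHz")   # 240
print(f"guard total:    {floors * GUARD} MHz")   # 30

# Enumerate each floor's paired blocks, e.g. 705-745 // 750-790
for i in range(floors):
    lo = START + i * FLOOR
    print(f"floor {i}: {lo}-{lo + BLOCK} // {lo + BLOCK + GUARD}-{lo + FLOOR}")
```

Running this reproduces the floors listed above: the pattern fits exactly six times, with 705-745//750-790 MHz as the fourth floor and 875-915//920-960 MHz at the top.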
This might seem a little fanciful, but if such a grandiose plan as this is not agreed upon some time soon, we are going to be left with a real mess. I leave the last word on this to my son who, upon seeing the house, said, "I don't think people would live in a house like that, it's a haunted house where ghosts would live!"
Aug 10, 2011
A new white paper proposes a means to allow every user in a network to use all the spectrum simultaneously and fully. Does this exceed the Shannon Law? And what are the implications for wireless networks?
A recent white paper by Steve Perlman and Antonio Forenza of Rearden (http://www.rearden.com/DIDO/DIDO_White_Paper_110727.pdf) discusses a technique they call 'Distributed Input, Distributed Output' or DIDO. In essence, DIDO is a form of MIMO, but which differs from MIMO in two specific ways:
- Firstly, the 'antennas' on the network side are intended to be placed anywhere (and everywhere), so could, for example, be replacements for your home WiFi hubs as easily as they could be positioned on towers.
- Secondly, the signal processing necessary to determine the waveform transmitted from each antenna is done centrally, rather than on a base-station by base-station basis.
The logic seems to be that, by using a central processing unit to do the complex MIMO calculations, the cost of the base stations can be reduced (because they become 'dumb transceivers') and further that the user equipment does not need any inherent signal processing at all, as it is all done in the network. Though the white paper is a little nebulous in some areas (there is no talk of how the connection from users to the network is managed, nor of how mobility is dealt with), the principles seem reasonable.
It is claimed that DIDO provides connectivity that exceeds the limits set by the Shannon Law (the law which determines the theoretical maximum amount of data that can be transmitted over any piece of radio spectrum), because all the spectrum can be re-used, all the time, for every user. But measured from the perspective of any user in the network, this is not the case. Each user's connection is still bound by the restrictions enumerated in the law.
As an analogy, the same could be said for two fixed links whose paths cross. Each may achieve a service which approaches the theoretical maximum connection speed, but at the point the paths overlap, the same spectrum is being used twice. So at that point, surely the spectrum is carrying twice as much data as the theory allows? It is not, because it is the channel capacity which is bound by the law, not the capacity of the spectrum itself.
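The distinction is clearer when written out: Shannon's formula bounds the capacity of a single channel (one transmitter-receiver pair), so each of the two crossing links gets its own budget. A minimal sketch, with illustrative bandwidth and SNR figures of my own choosing:

```python
# Shannon capacity applies per channel (per link), not per patch of
# spectrum: C = B * log2(1 + SNR). Two links re-using the same band
# each have their own, separate capacity bound.
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Theoretical maximum bit rate of a single channel, in bit/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 20e6                # a 20 MHz channel (illustrative)
snr = 10 ** (20 / 10)   # 20 dB SNR expressed as a linear ratio

c = shannon_capacity(B, snr)
print(f"per-link capacity: {c / 1e6:.1f} Mbit/s")  # ~133.2 Mbit/s
```

DIDO's claim amounts to packing many such links into the same band, not to any one link exceeding its own bound.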
So does DIDO really do anything new? Well the idea of distributed antennas for a MIMO network is interesting, however the paper alludes to the fact that for each new user on the network, a new antenna is required. Whether the computing power required to calculate the waveforms for the sheer number of sites that would be required is feasible is a bit of an unknown. But the more interesting point hidden in the text is the need for an antenna for every user. It 'feels' reasonable that if every user on a network had an antenna dedicated to them, then the service that they would receive would be excellent.
But what of the cost? Is it at all economically feasible to have, say, 60 million 'antennas' in the UK, noting that each one is also a transceiver and needs a broadband internet connection? On the face of it, if this were a regular terrestrial mobile network, then no. But if every cordless phone, WiFi hub and other connected device were to form a DIDO network, then possibly. The point here though is not that DIDO is the solution, but that having so many points of connection could offer a means of providing ubiquitous, high quality, wireless broadband. It's already clear that to satiate the demand for data, the only realistic solution available to mobile operators is to increase the number of sites they have. Having so many would clearly solve that particular problem. If you count WiFi hubs and hotspots as cell sites, one wonders just how many points of connection there already are.
What the white paper raises is the spectre that, if demand for data grows in the way that many are predicting, the number of wireless points of connection that will be needed may be way beyond the most frightening nightmares of current network planners!
Jun 17, 2011
The asymmetry of mobile data might point to alternative bandplans for mobile spectrum, but is it too late to do anything about it?
At the 6th European Spectrum Management conference in Brussels this week, a number of speakers focussed on the difficulties caused by the asymmetry of mobile data, as well as on possible solutions for it. The problem is this: the vast majority of Internet use is asymmetric, and in particular the amount of data which users download is much greater than the amount they upload. This has been recognised at a European level. Early drafts of Europe's Digital Agenda cited the need for 30 Mbps, symmetric connections, but the published version now talks only about 30 Mbps download speeds, and the reference to symmetry is gone. The asymmetry of mobile data is acknowledged within the majority of broadband radio standards, which almost universally offer faster downlinks than uplinks (LTE, for example, has a maximum downlink connection speed of around 100 Mbps but a maximum uplink speed of only 50 Mbps). Of course, part of the reason for this is the poorer quality of connection supported by the weaker signals transmitted from handsets compared to base stations, but you can be sure that those who develop the standards would have tried harder to balance uplink and downlink speeds had they believed there would be a need to do so.
But the extent of the asymmetry is being shown to be much greater than the 2:1 ratio inherent to the standards. Plum (in some work done for Qualcomm http://www.plumconsulting.co.uk/pdfs/Plum_June2011_Benefits_of_1.4GHz_spectrum_for_multimedia_services.pdf) indicated that the downlink to uplink asymmetry is in the ratio of around 8:1 and other studies put the figure as high as 10:1. Streaming video (seen by many as one of the major consumers of Internet bandwidth in the future) is almost unidirectional.
With the exception of spectrum being used for TDD purposes, all currently licensed mobile FDD spectrum is split 50:50 between uplink and downlink. Qualcomm were touting their proposed supplementary downlink to add capacity for downloads using the 1.4 GHz spectrum they won at auction in the UK (and which is still available in most other countries in the world, it having been officially set aside for L-Band DAB services which have not taken off). Even the European Broadcasting Union (EBU) were keen to highlight the potential additional downlink capacity that broadcast networks could offer if they worked more closely with mobile operators (instead of being at loggerheads with them over the valuable UHF spectrum they inhabit).
But this smacks of a case of 'closing the gate after the horse has bolted'. If the asymmetry of data is really 8:1, and technology has an inherent built in disparity of 2:1, it seems logical that there should be around 4 times as much spectrum given over to downlink compared with uplink. Within existing mobile bands (eg 900, 1800 and 2100 MHz) there is little to no scope to do things differently, but have regulators made a boob in thinking about how to split up new bands? The Digital Dividend (790 to 862 MHz) is currently split into 2 symmetric blocks of 30 MHz plus the necessary guard bands. A 4 to 1 split would instead have suggested that 12 MHz should have been given to uplink with 48 MHz used for downlink.
The situation at 2.6 GHz is even more acute. As it stands, there are 2 equal blocks of 70 MHz (plus 50 MHz of TDD spectrum). Surely this should have been more like 30 MHz uplink and 120 MHz downlink with TDD squished in the left over bits (which could also be used as downlink only).
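The splits suggested above follow from simple proportions, which can be sketched as follows. The band sizes and the 4:1 target ratio come from the text; the helper function is my own illustration.

```python
# Back-of-envelope split of paired FDD spectrum for a target
# downlink:uplink ratio, as argued above. Band sizes are from the
# text; the function name is illustrative only.

def fdd_split(paired_mhz, dl_ul_ratio):
    """Divide a paired-spectrum total into (uplink, downlink) MHz."""
    ul = paired_mhz / (dl_ul_ratio + 1)
    return ul, paired_mhz - ul

# Digital Dividend: 2 x 30 MHz of paired spectrum, 4:1 target ratio
print(fdd_split(60, 4))   # (12.0, 48.0) -> 12 MHz up, 48 MHz down

# 2.6 GHz: 2 x 70 MHz of paired FDD spectrum
print(fdd_split(140, 4))  # (28.0, 112.0)
```

An exact 4:1 split of the 140 MHz of paired 2.6 GHz spectrum gives 28/112 MHz; the rounder 30/120 MHz figure would also draw a little on the 50 MHz TDD allocation.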
Of course, proponents of TDD technologies would argue that their services have the flexibility to re-assign uplink and downlink capacity on an 'as needed' basis, and that the correct solution would be to assign all new spectrum for TDD use. But this may not be efficient, as the need for additional 5 MHz guard bands between operators would reduce the effectiveness of the arrangement. One of the most beneficial uses for TDD spectrum (and one which gets around the need for 5 MHz guard bands) would be to use it only for downlink. If all base stations spent 100% of their time transmitting and 0% receiving (a perfectly valid combination for a TDD network), guard bands become unnecessary and the TDD system becomes downlink only, which is kind of win-win for both FDD and TDD operators.
Ironically, in the UK, Ofcom's initial proposals for the auction of the 2.6 GHz band could have yielded results in which the amount of FDD spectrum was not 2 x 70 MHz as per international norms, but that this could be reduced in favour of assigning more spectrum to TDD. One of the flaws in this design was the need for inefficient guard bands between FDD and TDD networks (and between TDD and other TDD networks). But if these TDD blocks were downlink only the guard band problem goes away and the auction could have yielded the kind of 4:1 split in downlink and uplink spectrum that may be necessary - it would have at least opened up the opportunity for asymmetric up and downlink assignments.
So what can be done? As things stand, very little. Current arrangements will inevitably lead to inefficiencies where uplink spectrum is underused compared to downlink spectrum. Using the 1.4 GHz band or broadcast networks to provide additional downlink capacity might help a bit, but unless the realisation that asymmetric allocations are necessary takes hold quickly amongst policymakers, we will be left with suboptimal allocations, and in turn poorer or more expensive services as operators use other methods to solve the capacity crunch. Perhaps any 'Digital Dividend 2' should be downlink only (a.k.a. broadcast) to redress the balance a little - now there's a thought...