Technology

In this section, we define basic broadband terminology and describe the different technology options for building next-generation local access networks, along with the strengths and weaknesses of the different approaches.  We also highlight some of the technical issues around network installation and delivery of services to the end user.

Broadband projects rarely fail because of technology choices; they fail because of poor project management.  Nevertheless, choosing the right technology for a broadband project is vital because it determines the capabilities of the customer's connection, the ease and potential for future network upgrades, and has a huge influence on the project costs.


Technology Primer

This page will help you distinguish your bits from your bytes.

For more technology definitions, see our Jargon Buster.

What is broadband?

When broadband first appeared in the UK in the late 1990s, it was characterised by two things:  it was always on, allowing customers to surf the internet and make phone calls at the same time, and the speed of data transfer was faster than that of dial-up modems.  Today the term broadband has become synonymous with always-on access to the internet, regardless of the technology used.

One caveat: although the term broadband is becoming increasingly diluted, it usually refers to the affordable internet access offered to consumers and small businesses, not to bespoke, high-capacity internet connections for the enterprise market.

What is superfast broadband?

Superfast broadband originated as a marketing term without a strict definition, but Ofcom is now using it to describe broadband speeds greater than 24 Mbps.  The significance of 24 Mbps is that this is currently the maximum possible speed for broadband over existing copper telephone lines.  However, it's worth noting that BT is marketing all of its fibre-based broadband products as "superfast", with a lower speed limit of just 5Mbps.

What is next-generation access?

The majority of homes and small businesses in the UK currently receive broadband services through the access network that connects them to their local telephone exchange via a twisted-pair copper cable.  The term next-generation access (NGA) describes a significant upgrade to the access network.

In NGA networks, some or all of the copper in the network has been replaced with fibre.  Since fibre is capable of sustaining much higher data transmission speeds over longer distances than copper cable, NGA is the key enabler for faster broadband.

It is generally accepted that NGA includes fibre-rich infrastructure and technologies such as fibre-to-the-cabinet (FTTC), fibre-to-the-home or premises (FTTH/FTTP) and upgraded cable TV networks.

There has been some confusion about the difference between broadband and NGA.  Broadband is a service that provides a connection to the internet; NGA refers to the physical cables and equipment that deliver the service.

Bandwidth, bits and bytes

The performance of a broadband connection is most often described by its speed, or bandwidth.  This is the amount of digital data that can be transmitted in a given time, measured in bits per second.  A bit is the smallest unit of information, either 0 or 1, in the digital language of computers.

Dial-up modems connected at 56 kilobits per second (kbps).  Today the average download speed of broadband connections in the UK is nearly 100 times faster at 5.2 million bits per second (megabits per second or Mbps), according to a study carried out in May 2010 by Ofcom with technical partner Samknows.

The total quantity of data, like hard disk capacity, is measured in bytes rather than bits, where a byte equals eight bits.  A typical email is just a few thousand bytes (kilobytes or kB), while standard quality BBC iPlayer requires a continuous 800kbps of throughput, so watching a 30 minute programme would consume 180 million bytes (megabytes or MB) of data.

A number of internet service providers (ISPs) in the UK have introduced bandwidth allowances, which place an upper limit on the total amount of data consumed during the month, typically 10 billion bytes (gigabytes or GB) for any entry-level broadband account.  Consumers exceeding their allowance may incur penalties, such as a surcharge on their bill or “throttling”, where the speed of the connection is reduced for a period.

A 10GB data allowance will allow hundreds of hours of basic web browsing, but it is not particularly generous for streaming video.  Future applications are likely to make heavier use of video.  For example, streaming a little over eight minutes of HD-TV at 16Mbps would consume a massive 1 GB.
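The arithmetic in these examples can be sketched directly. This is a minimal check using decimal units, as in the text (1 MB = 1,000,000 bytes, 1 GB = 1,000,000,000 bytes, 8 bits per byte):

```python
# Data-usage arithmetic from the examples above.
# Decimal units, as in the text: 1 MB = 10**6 bytes, 1 GB = 10**9 bytes.

def stream_bytes(bitrate_mbps: float, minutes: float) -> float:
    """Bytes consumed when streaming at a constant bitrate for a duration."""
    bits = bitrate_mbps * 1_000_000 * minutes * 60
    return bits / 8

# 30 minutes of standard-quality iPlayer at 800 kbps -> 180 MB
print(stream_bytes(0.8, 30) / 1e6)   # 180.0

# Minutes of 16 Mbps HD-TV that fit in a 10 GB monthly allowance
minutes = 10 * 1e9 * 8 / (16 * 1_000_000) / 60
print(round(minutes, 1))             # 83.3
```

So the entire 10 GB entry-level allowance buys less than an hour and a half of HD streaming, which illustrates why video drives data consumption.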

Broadband speeds explained

Advertised speed is the speed that ISPs use to describe the packages they offer to consumers.  It is usually expressed as an “up to” speed because it is only a guide to the speed the ISP can provide. Few subscribers (if any) achieve the “up to” speed advertised by internet providers, which is a source of consumer dissatisfaction and much debate in the industry.

Line speed is usually the maximum speed a customer’s telephone line can support, which depends on factors such as distance to the telephone exchange and line quality.  The line speed will always be slightly higher than the speed the customer actually experiences because 10-15% of transmitted bits are protocol overheads to manage the connection.

Throughput speed is the actual speed a consumer experiences at any particular moment when they are connected to the internet. This figure is dependent on many factors, including the ISP’s traffic management policy, the number of subscribers sharing the connection (contention), congestion across the core of the internet, and the speed of the target website’s connection to the internet.  Poor in-home wiring and old computer equipment can also reduce the throughput speed.

An ISP doesn't have control over all of the factors affecting your broadband speed.  The ISP can tell you exactly what your line speed should be, and also controls the "contention ratio" in the backhaul, which determines the amount of capacity allocated per user in the connection between the telephone exchange (or equivalent) and the internet.
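The line-speed/throughput relationship described above can be sketched as follows. The 12.5% default is simply the midpoint of the 10-15% overhead range quoted; actual overheads vary by protocol:

```python
# Estimate best-case usable throughput from line speed, assuming
# protocol overheads consume 10-15% of transmitted bits (12.5% here).

def usable_throughput(line_speed_mbps: float, overhead: float = 0.125) -> float:
    """Speed left for user data after protocol overheads."""
    return line_speed_mbps * (1 - overhead)

# An 8 Mbps line speed leaves roughly 7 Mbps for user data.
print(usable_throughput(8.0))   # 7.0
```

Note this is an upper bound: contention, congestion and in-home factors reduce the real throughput further.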

 

This article was originally published in the "Beyond Broadband" booklet.



Digital Subscriber Line (DSL)

DSL is a family of technologies that provides data transmission over the wires of a local telephone network.

Asymmetric Digital Subscriber Line (ADSL)

ADSL is the technology used to provide the first-generation of broadband connections over existing copper telephone lines, and has been deployed on a mass scale around the world.

Data is transmitted over the telephone line at frequencies that are too high for the human ear to hear.  A DSL filter, known as a “splitter”, fitted to the telephone socket inside the house breaks out the frequencies for voice from those used for data, and sends them to the correct piece of hardware (telephone or computer).  At the other end of the line in the telephone exchange, a so-called DSL Access Multiplexer (DSLAM) separates the voice and data traffic so that it can be carried over the phone company’s separate voice and data networks.

ADSL, which is available in all but a handful of UK telephone exchanges, offers headline speeds of up to 8 Mbps, depending on which version of the technology is available.  However, the speed a user actually receives depends on a number of factors related to the characteristics of copper phone lines.  ADSL works best over short lines: the shorter the distance from the telephone exchange to the customer premises, the faster the connection.  Other factors, such as the quality of the copper and connectors, aluminium cables in the network and line-sharing devices (DACS), also affect the service.  Hence it is estimated that around 10% of homes and businesses cannot get a 2 Mbps service from their connection and around 166,000 cannot get any sort of ADSL broadband.

21CN and ADSL2+

BT is in the process of rolling out 21CN (an abbreviation for 21st Century Network), a long-term project to upgrade the core of the network so that it can carry both voice and data – for the simple reason that it is more efficient to manage one network than two.  As part of this programme, BT is replacing DSLAMs in the exchanges with new equipment that can support ADSL2+.

ADSL2+ has a headline speed of 24 Mbps, which can represent a significant bandwidth boost for some.  But, like all copper technologies, the speed of ADSL2+ depends on line quality and distance; beyond 3 km from the exchange there is no real speed advantage over ordinary ADSL.  An estimated 50% of telephone lines are capable of speeds above 8 Mbps, with the majority remaining in the 8–12 Mbps bracket.

Very high speed Digital Subscriber Line (VDSL)

VDSL is usually deployed in combination with fibre-to-the-cabinet (FTTC).

FTTC boosts broadband speeds by shortening the distance from the electronic equipment to the customer. This involves laying fibre-optic cables from telephone exchanges to green street cabinets or their equivalent, and installing faster VDSL2 equipment in the street cabinet to provide broadband over the remaining few hundred metres of telephone line.

The speed offered by VDSL depends on its “profile” which is essentially the set of frequencies used. The most common configuration in the UK today offers up to 40 Mbps download. As with other copper-based technologies, top speeds are only available for users located next to the cabinet. Speed decreases rapidly with distance from the cabinet, and at distances beyond 1 km VDSL2 offers ADSL-like performance. The average distance from the street cabinet to customer is around 300 m, so the majority of end users can expect to see broadband speeds in the region of 25 Mbps with this approach.
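The distance figures quoted above can be turned into a toy speed estimator. The straight-line interpolation below is purely illustrative (real VDSL2 rate-versus-distance curves depend on the profile, cable gauge and noise), but it reproduces the numbers in the text:

```python
# Toy VDSL2 speed estimate by linear interpolation between the quoted
# points: 40 Mbps at the cabinet, ~25 Mbps at 300 m, ADSL-like beyond 1 km.
# Illustrative only - not a real rate model.

import bisect

DIST_M  = [0,    300,  1000, 2000]
RATE_MB = [40.0, 25.0, 8.0,  8.0]

def vdsl_estimate(distance_m: float) -> float:
    """Interpolated download speed (Mbps) at a given cabinet distance."""
    if distance_m >= DIST_M[-1]:
        return RATE_MB[-1]
    i = bisect.bisect_right(DIST_M, distance_m)
    x0, x1 = DIST_M[i - 1], DIST_M[i]
    y0, y1 = RATE_MB[i - 1], RATE_MB[i]
    return y0 + (y1 - y0) * (distance_m - x0) / (x1 - x0)

print(vdsl_estimate(300))    # 25.0  (the average UK cabinet distance)
print(vdsl_estimate(1500))   # 8.0   (ADSL-like beyond 1 km)
```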

This article originally appeared in Beyond Broadband.


FTTx Technology Overview

Coming soon...


Cable Broadband: How It Works

By Malcolm Taylor

Cable networks were originally established as unidirectional networks to deliver television and radio stations into the customers' home. Cable provided a high quality alternative to the aerial radio and television broadcasting that was often subject to interference. The old cable networks were fully coaxial cable based.

In mainland Europe, the earliest deployments started in the 1930s. Until the 1990s, there were thousands of small networks all over Europe but most of these are now consolidated into larger cable operators.

In the UK, cable networks started to emerge from the mid 1980s, following a policy decision by the government to liberalise the telecommunications market and create ‘infrastructure competition’ to BT, which was subsequently privatised. Although some UK cable networks were established during the latter half of the 1980s, it wasn’t until the early 1990s that cable network build really accelerated and in a 6-7 year period, over 50% of UK households were passed by new cable networks.

In mainland Europe, cable operators needed to upgrade their networks from unidirectional to two-way capability and invested extensively in fibre. In the UK, because of the later start, extensive fibre was deployed from the outset.

As a result, most current cable networks contain significant levels of optical fibre, often to less than 100 metres from the customers' premises.  The final connection into the customers’ premises is coaxial cable. Consequently, the name 'hybrid fibre-coax' (HFC) network is used to describe the majority of modern cable networks. Based on this network structure, in addition to the traditional broadcast services, cable operators can now offer broadband Internet services in excess of 100Mbps.

How it works

The cable network comprises a number of elements – the headend, the fibre and coaxial cabling to the customers’ premises and the individual customer’s terminal equipment.

The headend is where the broadcast content is received, either from a satellite or a local TV antenna or sometimes via a direct fibre link from a studio.  The headend processes and assembles the content for onward delivery to the customer. It also connects with other  network and service providers.


In a modern HFC cable plant, fibre optic cables carry the content (radio frequency signals) as light (optical signals) from the headend to optical nodes in the various neighbourhoods served by the cable network. The node converts the optical signals back to RF signals and the local part of the cable network distributes the RF signals to the customer, over the coaxial cable. Typically, local nodes serve between 500 and 1,000 customers’ premises.

In addition, the HFC architecture enables the delivery of signals that originate in customers’ premises back to the headend. This two-way capability supports the provision of interactive audio, video and data services.

The local coaxial (or drop) cable is connected to consumer electronics equipment, often referred to as CPE (customer premises equipment), inside the home. This equipment (such as television sets, set-top boxes, cable modems and personal video recorders) processes the cable signals and enables subscribers to view, record, and interact with those services.

Most cable operators provide set-top boxes and cable modems (that connect to the HFC plant to provide always-on, high-speed access to the Internet) as part of a subscription package.

In addition, telephone services are offered on cable networks using a “telephony over IP” protocol (based on the EuroPacketCable 1.0/1.5 standards). Many cable modems now incorporate the telephony function and increasingly act as wireless routers too.

How much capacity?

Typically, HFC cable networks carry multiple television channels, radio and telephone services, video on demand (VOD) and broadband Internet services across a wide range of radio-frequency spectrum. In most cases this extends to 862 MHz, but cable networks can operate up to 1 GHz as more capacity is required to meet growing customer demand for bandwidth-hungry services.
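As a rough illustration of how much channel capacity that spectrum represents: European cable systems divide the downstream band into 8 MHz channels. Both the channel width and the 86 MHz lower band edge below are assumptions not stated in the text:

```python
# How many 8 MHz downstream channels fit below the 862 MHz limit
# quoted above. Channel width and lower band edge are assumptions.

CHANNEL_WIDTH_MHZ = 8
BAND_START_MHZ = 86     # assumed lower edge of the downstream band
BAND_END_MHZ = 862      # upper limit quoted in the text

channels = (BAND_END_MHZ - BAND_START_MHZ) // CHANNEL_WIDTH_MHZ
print(channels)   # 97
```

Each of those channels can carry either broadcast television or DOCSIS data, which is why operators can trade capacity between the two services.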

In terms of broadcast services, multiple channels are available, and these are comparable to those delivered by direct to home (DTH) satellite.

Broadband Internet services are provided using a technology known as EuroDOCSIS, the latest version of which (EuroDOCSIS 3.0) allows data speeds of 160 Mbps downstream and 120 Mbps upstream, which is at least four times faster than the previous EuroDOCSIS version. These speeds are achieved by ‘bundling’, or combining, a number of channels.

As EuroDOCSIS 3.0 places no limit on how many channels can be bundled, the speed for data communications via cable will progressively increase to multiples of 160 Mbps and 120 Mbps.
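A minimal sketch of that scaling, assuming (as the text implies) aggregate speed grows in multiples of the four-channel 160/120 Mbps baseline:

```python
# EuroDOCSIS 3.0 channel bonding: aggregate speed scales with the
# number of bonded channel groups (160/120 Mbps baseline, per the text).

BASE_DOWN_MBPS = 160
BASE_UP_MBPS = 120

def bonded_speed(groups: int) -> tuple[int, int]:
    """(downstream, upstream) Mbps for a given number of channel groups."""
    return BASE_DOWN_MBPS * groups, BASE_UP_MBPS * groups

print(bonded_speed(2))   # (320, 240)
```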

EuroDOCSIS 3.0 also accommodates the increased demand for IP addresses by integrating the new Internet Protocol version 6 (IPv6). The demand for more IP addresses is generated by an array of new Internet enabled devices (laptops, PVRs, mobile phones, etc.).

Strengths and weaknesses

The key strength of a modern HFC cable network, compared with other mainstream broadband technologies, is its extensive use of fibre-optic cabling deep into the local community. This allows the provision of significantly more broadcast services as well as very high-speed broadband, based on the latest generation of EuroDOCSIS technology referred to above.

The combination of fibre to local nodes and coaxial cable drops to the customers’ premises, as opposed to the twisted pair cables in older incumbent telephone networks, means that significantly more bandwidth is available. At present, the coaxial drop is adequate but the current HFC structure also provides a very good base for cable operators to extend fibre into the home to further increase capacity.

Another potential benefit of cable networks is that, with local headends, more localised broadcast and other services can be provided.

Next-generation broadband

Any network, large or small, particularly those serving local communities, has to provide a range of services that customers seek – particularly in a competitive broadcast and broadband market. To make a wide range of services available to customers, all networks have to interconnect with other networks and with content and service providers. In this respect, new networks need to deploy technology that will support the transfer of content and services across network interfaces and also look to find areas of mutual interest with other network operators.

As far as the UK is concerned, the existing mainstream cable network operator, Virgin Media, reaches between 50–55% of households, compared to BT’s universal coverage.  New community projects can offer the opportunity for cable to extend its reach, whilst customers within the community can benefit from the service range that cable technology offers.


Wireless Broadband

Coming soon...


Satellite Broadband

By Mike Locke

Evolution of satellite systems

Networks and businesses have been using satellites for data for as long as there has been an internet. In fact, the first ever internet connection into the UK was carried over the Atlantic by satellite – albeit at only 9kbps.

Several things have changed in the intervening four decades: speeds have increased to tens and hundreds of megabits per second, and costs have come down dramatically.  Today, it’s possible to buy your own two-way satellite connection for less than £300 and subscribe for £25 per month. That means that “proper” broadband is now accessible to everyone in the UK no matter how remote from their telephone exchange or fibre backbone.

The frequencies used by satellite, both for data and for television, have also risen from C-band to Ku-band and some now in Ka-band. (Definitions vary but in a satellite context, C-band is around 3.6-7GHz, Ku in Europe is usually taken as 10.6GHz – 12.75GHz and Ka is above 26.5GHz). The higher frequencies are made available as technology develops and they are needed because services soon fill up frequency bands as they are made available.

The move to digital systems for television saw a whole new set of standards developed under the umbrella of Digital Video Broadcasting (DVB). Digital TV is, obviously, a digital transmission system, and DVB technology has been adapted to carry internet data as well as digital TV data. This means a satellite broadband system can share much of the same infrastructure as a satellite TV system, and the customer equipment can share many of the same components and software.

The commonality of technologies has enabled a much lower cost base for many satellite internet networks and hence today most satellite broadband systems aimed at consumers will be largely based on DVB.

The equipment

A satellite network has two main components: the satellite itself in orbit and the dishes and systems back on earth – the so-called “space segment” and “ground segment”. For a TV system, the ground segment will consist of the operations systems, which control the satellites in flight, the uplinks that send the TV signals up to the satellites and the various links to receive the TV broadcasts from the broadcasters; plus, of course, the dish and satellite TV receiver at the viewer’s house.

Add the ability for the customer to transmit as well as receive, driven by a suitable satmodem, and link the operations hub to the internet backbone and that, in simple terms, is a satellite broadband network.

Until the advent of DVB-based systems, the customer premises equipment (CPE) was quite expensive and needed a 1 or 2-metre dish, a relatively high-power transmitter and a specialist satmodem. Usually, it would take a two-person team half a day or more to install and set up.  These expensive installations are still used in certain applications, but most consumer satellite broadband systems nowadays use a low-power (no more than 2W) transmitter and a dish no more than 75cm in diameter. The CPE is easy for a single person to install and quick to get going, with automatic commissioning and a simple Ethernet connection to the computer or router.

Satellite advantages

The main advantage of satellite broadband is that it is available just about anywhere you can see the southern sky (it has to be the southern sky as geo-stationary satellites orbit the Earth around the Equator). That being the case, satellites have long been used for communications in remote locations such as oil rigs where running cables was simply impractical or prohibitively expensive. Other networks use satellites where they want direct connections or just don’t want to share infrastructure, perhaps for security: National Lottery terminals or car dealerships being common examples.

There is still a trade-off between price and availability as the space segment, and hence the bandwidth carried by it, is relatively expensive. Terrestrial broadband is cheaper to use once the cable or phone line has been laid. However, since laying new cable can cost the operator up to £100 per metre, a satellite installation may have a lower upfront cost, plus it’s quicker to install.

Speeds over satellite are typically 4–10 Mbps, but can be up to 100 Mbps.  At one of the mature satellite positions such as ASTRA 3 at 28.2°E, there is more than 4 Gbps of aggregate capacity available, with more to come as compression technology continues to improve.

Because of the economics of satellite, it will never be as cheap as a connection to an existing network a few kilometres away from the exchange. However, recent advances mean that a perfectly reasonable speed of 1–10Mbps can be had for around £20–£25 per month.

The fundamental advantage of satellite broadband is that you can have it installed within a few days and get online with reasonable speeds just a little more expensive than the UK average. If you’re in a location where terrestrial broadband still hasn’t arrived, satellite can connect you straight away.

Broadband is not the only service that can be delivered by satellite; the obvious service that can be received on the same dish, as long as your dish is pointed at the right satellite, is digital TV like BSkyB and Freesat. Some providers can also offer a VOIP service with a UK phone number.

Satellite disadvantages

All internet services have issues with contention and resource restrictions. That’s in the nature of shared access services and can only be avoided by guaranteed – and expensive – committed information rate or leased line services. Satellite broadband usually has tighter restrictions than terrestrial services simply because of the higher cost of providing the bandwidth.

However, users have the choice of different packages to match their requirements as closely and as economically as possible. For example, there are packages with unlimited data but a gradual throttle for overuse; packages with no throttle and a set amount of data each month; packages with a limit during the day but unlimited overnight downloads and so on. The important point is that the user needs to take a little more care to choose and make effective use of the package that’s right for them.

Satellite services are based in different countries and so it is important to check that the service you choose has a UK IP address. That means you will automatically get the UK version of websites such as Google.co.uk and not be excluded from country-specific services such as BBC iPlayer.

And, of course, there is latency.  Latency is the “round-trip” time for a packet of data to travel from the user over the connection to the computer being visited and then back to the user again.  Since the satellite is in orbit some 36,000km high, the signal takes just over one tenth of a second to reach it and another tenth to come back to earth again, and the link is crossed twice for the request and twice again for the reply.  Even at the speed of light this introduces a minimum of around 480 milliseconds into any satellite connection.
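A quick check of the propagation arithmetic (a sketch only; real-world latency is higher once processing delays and terrestrial hops are added):

```python
# Minimum satellite round-trip delay: the 36,000 km ground-satellite gap
# is crossed four times - up and down for the request, up and down for
# the reply - before any processing delay is added.

ALTITUDE_KM = 36_000        # approximate geostationary altitude
C_KM_PER_S = 299_792.458    # speed of light in vacuum

one_way_s = ALTITUDE_KM / C_KM_PER_S
round_trip_ms = 4 * one_way_s * 1000

print(round(one_way_s, 3))    # 0.12 s per leg
print(round(round_trip_ms))   # ~480 ms minimum
```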

Some satellite systems use acceleration techniques which wait for the whole webpage to be assembled and then send it across as a single transmission, rather than requesting one file at a time – so the download takes longer to start but finishes sooner.

For most applications, latency is not a major issue. But for applications such as real-time gaming where half a second is the difference between being shot and diving into cover, then satellite will suffer because of the latency.  Some virtual private network (VPN) systems can also have problems. The VPN prevents satellite system software from altering the private data, so a VPN cannot benefit from satellite system acceleration.

Other satellite systems

This brief article has restricted itself to services carried on geostationary satellites direct to the consumer. There are other services on low Earth orbit (LEO) systems but these are significantly more expensive both for the equipment and the data.

In the past, due to the cost of the satellite terminal, it used to be economic to install a single terminal in a community to act as a hub and then use local connectivity such as Wi-Fi to share the satellite connection to homes and businesses. However, now that the CPE is so cheap and easy to install, the benefits of the communal approach are outweighed by the complexity. It is much simpler and cheaper to have a dish and satmodem each with no need to share any connection.

In conclusion

Satellite broadband services delivered by geostationary satellites have truly now come of age. They offer a good solution, ubiquitous and reasonably priced. They make no claim to the latest superfast 100 Mbps speeds at a rock-bottom price: for that you will either have to wait a while for a new technology to deliver or move house. On the other hand, if you like where you live or work and want to get connected today, then satellite will deliver.


Installing fibre-optic cables underground

By Neil Bradley, Fibre Options

Analysis shows that between 60% and 80% of the capital costs of a fibre project are due to civil work, ducts and cables. In other words, the cost of digging holes and filling them in again.

There are ways of getting round these costs, such as wireless transmission, overhead poles, and so on, but in the main, if a future-proof network is to be deployed, then only fibre will do the job.

Costs for digging can vary enormously, from £5 per metre to £100 per metre, depending on where you are digging and how much disruption you cause. If permissions have to be granted, the cost can depend on the traffic-control or diversion requirements for the area, which can be expensive. If the digging is in soft ground and reinstatement is not a problem, then costs are low. If the dig can be achieved with a slot cut (a very narrow channel), then costs are about £25 per metre. Costs then escalate to £35–£50 per metre for cutting into the pavement, and can reach £100 per metre in the main carriageway.
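The per-metre figures above can be combined into a rough civils cost sketch. The rates are taken from the ranges quoted (the £40 pavement rate is a midpoint), and the example route is hypothetical:

```python
# Rough civils cost estimate from the per-metre rates quoted above.

RATE_PER_METRE = {       # GBP per metre
    "soft_verge": 5,
    "slot_cut": 25,
    "pavement": 40,      # midpoint of the quoted 35-50 range
    "carriageway": 100,
}

def dig_cost(route: dict[str, float]) -> float:
    """Total dig cost for a route, given metres of each surface type."""
    return sum(RATE_PER_METRE[surface] * metres
               for surface, metres in route.items())

# Hypothetical route: 2 km of verge, 300 m slot cut, 50 m of carriageway
print(dig_cost({"soft_verge": 2000, "slot_cut": 300, "carriageway": 50}))
# 22500
```

Even in this favourable mix, the short carriageway crossing accounts for over a fifth of the total, which is why route planning matters so much.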

Over the past few years, lower cost alternatives to traditional trenching have emerged.  Here we will introduce some of these methods.

Micro-Trenching


Micro-trenching is particularly suited to roadways and sidewalks where utilities are already present beneath the road surface.  It requires only a shallow trench, typically about 15 cm deep, which does not penetrate beyond the surface layer of the road.

Advantages: Significantly faster and less expensive to deploy than traditional trenching (approximately 35% less), with less damage to existing roadways.  The shallower depth also means cables are closer to the surface, and easier to reach and repair if there is a problem.

Moleploughing


This installation method is suitable for burying cable or sub-duct in rural verges or across farmland. Specialist machines ‘plough’ a slot directly into the ground and lay the cable or sub-duct into the slot immediately, in one continuous operation.  The ground then closes over the slot and needs no re-instatement.  

Advantages: Significantly faster and less expensive to deploy than traditional trenching - typically 40% cheaper.
Disadvantages: Cable and sub-duct products for moleploughing must be somewhat tougher than standard designs to withstand the heavy-duty installation method, which adds to their cost.

Directional Drilling

Directional drills are relatively compact, allowing them to get into tight spaces and to be placed at the side of a road without impeding traffic.  Only a small crew is required: a drill operator and a locating-equipment operator.  The locator operator electronically tracks the progress of the drill head beneath the surface using a hand-held locator, and also gathers data from the sonde located in the drill head behind the drill bit.  The sonde reports location, depth, roll angle, pitch and temperature, helping the driller adjust the direction of the head and control the bore path.

Advantages: A clean, trenchless solution that does not disturb the surface above, saving on excavation and reinstatement costs.  There is also no need to apply for road-opening permits for public road works.

Impact Moling

Unlike horizontal directional drilling – which can be guided – impact moling works in straight lines, and requires both a launch pit and a reception/catch pit. In the launch pit, the mole is lined up with the catch pit and then set in motion. Impact moling has many applications including the renewal of lead water pipes, the installation of utility pipework and cable-laying.

Advantages: Suitable for all soil conditions except rock. Minimal or no excavation beyond the necessary connection pits, and minimal disruption to the customer and customer’s property.


Using overhead distribution lines to carry fibre optic cables

By Jim Rowe, AFL

Every village and small community in the UK is connected to the electricity distribution network, and in most cases this connection is carried on wooden poles stretching out across fields to the nearest electricity substation. Usually these poles carry two or three electricity conductors and would potentially make an unobtrusive and convenient way to install fibre-optic cables to carry broadband connectivity – providing power and Internet over the same poles.

Electricity is best carried over long distances at high voltages, and then the voltage is reduced in steps as the power is brought closer to houses and businesses. The National Grid operates at 400,000V with conductors carried on massive steel pylons from power stations to primary substations where it is converted to 132,000V for regional distribution. The next set of sub-stations converts the power to 66,000 or 33,000V. Voltages of 66kV and above are usually carried on steel lattice towers and voltages of 33kV and below are normally carried on lines supported by wooden poles. As the voltages come down, the sizes of the support structures get smaller, the number of conductors gets less and the height above ground level decreases.

The next step is to convert the electricity into 11,000V to make the connection to villages and then finally there is a step down to mains voltage for connection to individual properties. Often, all of these final links in the electricity distribution chain are carried on poles above ground with connections to houses at roof-top height. Properties built in the last 25 years will have all of their services connected underground because this has been planning policy since the 1970s, but even so, few properties are more than 100m from an overhead electricity distribution line.

Power utility companies are big users of communications to operate and control their networks, and many of the bigger transmission lines already carry fibre-optic communications cables to provide connections between major sub-stations and control centres. Distribution lines radiating outwards from substations towards towns, villages and consumers have, for the most part, not been equipped to carry communications cables because there has been no requirement for the power companies to do so in these parts of their networks.

The same technologies that have been used since the 1980s to add fibre-optic cables to large transmission lines can also be used on medium voltage and low voltage distribution lines, and this has attracted the attention of organisations that are planning to build broadband networks. These lines reach right into the target communities, providing both the means of connecting to the broadband service providers’ infrastructure and also the means of distributing connectivity to individual consumers within the community.

The key advantages of using overhead electricity distribution lines to carry cables providing broadband connectivity can be summarised in three distinct areas: speed, security and cost.

  • Speed: It is always much, much quicker to install fibre-optic cable by attaching it to poles than it is to dig trenches to bury it underground. Directional drilling or ploughing are alternative ways of installing underground cable, but these are also slow and expensive compared to installation on overhead lines. Circumstances will vary according to the time of year with factors including weather conditions, whether or not crops or animals are in the fields and what the ground conditions are like underfoot; however it is generally possible to install at least 1km of fibre optic cable a day on overhead power lines and up to 5km per day is possible in favourable circumstances.
  • Security: A key concern in any fibre-optic cable installation. Cables have been installed on overhead power lines since the very early 1980s and have developed an excellent reputation for security and reliability over that time. Power utilities use these cables to carry critical communications for control of the electricity network. Fibre-optic cables installed above ground are not subject to ‘dig-ups’, the biggest cause of cable damage in the UK. Cables that are installed as part of the electricity infrastructure are also protected by the proximity of power conductors, which deters theft and vandalism.
  • Cost: The higher unit cost of aerial cables compared to underground cables is more than offset by the much lower cost of installation, so aerial cables have the lowest total cost. Aerial cables have much higher installation rates, so networks are built more quickly, begin providing services earlier and deliver quicker returns on investment. Put another way, with reduced initial costs and earlier in-service dates, aerial cables have shorter payback times than underground networks.

Several technologies are available to add fibre-optic cables to overhead power lines: ADSS, OPPC and AccessWrap. The choice of which to use will depend upon the type of overhead line.

ADSS (All-Dielectric Self-Supporting) is the simplest concept for aerial fibre-optic cable: it is an underground fibre optic cable made stronger to allow it to be installed by attaching it to a series of poles. The cable needs to be physically strong because it will be supported only at each pole along the route and will have to support its own weight across the half-span on each side of the pole. This is in contrast to an underground cable which is fully supported inside a duct or in a back-filled trench along its whole length.

In addition to its own weight, ADSS cable must support the extra loads imposed by wind pressure and by the build-up of ice where this is a problem in exposed locations. These extra loads can be significant and require carefully designed clamps to spread the mechanical strain over several metres of cable at each pole to prevent any risk of damage.

ADSS cables have the advantage that they are completely independent of the electricity supply network, even though they are installed on the same poles. Potentially the two networks can be owned, managed and maintained by different organisations, although there are safety issues when people carrying out installation and maintenance activities are working in close proximity to live electricity conductors. This will inevitably mean that communications technicians working on the fibre-optic cables will have to be trained and certified by the electricity industry to work on energised power lines.

The main concerns regarding the use of ADSS are related to the amount of load exerted on the supporting poles and the clearance between the ADSS cable and objects around it, be they trees close to the line, traffic or farm vehicles passing underneath, or the electricity conductors on the line itself. Since the local landscape changes from line to line, and since there are many different designs of poles in use, in some cases it will simply not be possible to install ADSS in a way that provides a secure and reliable installation.

OPPC (Optical Phase Conductor) is a replacement electrical conductor that has optical fibres built into it as part of the manufacturing process. The fibres are inside the conductor, usually contained within a stainless steel tube. OPPC is installed on an overhead electricity line in place of one of the normal conductors.

OPPC replaces one of the normal conductors and therefore it adds nothing to the appearance of an overhead line and it does not affect the mechanical or electrical rating of the line. From this point of view, OPPC is the least obvious and most secure of all of the cable types. However it is also the technology that is most intimately associated with the electricity supply network as it physically forms part of this network. Any maintenance activity on either the communications or power network involving OPPC will have an impact on the operations of both networks.

OPPC is normally only installed as part of the construction of a new line or during the complete refurbishment of an existing line and so it is unlikely that OPPC will be specified by any organisation other than a power utility company.

AccessWrap is a technique that installs a fibre-optic cable onto an overhead electricity distribution line by wrapping it securely onto one of the power conductors. This is a scaled down version of the SkyWrap process that has been used since 1982 to install fibre-optic cables onto power transmission lines; the smaller, lighter AccessWrap machine is designed to work on power lines supported by wood or concrete poles and with conductors spaced only 0.5m apart.

The optical cable is supported by its host conductor and so it does not need to carry any of its own weight. Therefore it can be very small and this means it has little effect on the mechanical and electrical performance of the overhead line; it also has little impact on the appearance of the line. Installation is carried out using a special device which travels along the host conductor carrying a drum of optical fibre cable. The device rotates as it moves and wraps the cable under carefully controlled tension onto the host conductor at a pitch length of about three quarters of a metre. Clamps are used on each side of each pole to hold the cable in place on the conductor. The machine moves at about walking pace with about 15 minutes required at each pole to lift the machine onto the next span and put the clamps in place.

AccessWrap does not place extra load on the poles supporting the power line, nor does it reduce the ground clearance under the line, and these are major advantages over ADSS in some circumstances. However, AccessWrap is wrapped onto one of the power conductors and therefore it is much more closely associated with the power network than ADSS. Even so, evaluations carried out in the early part of 2011 by several utilities in the UK and Ireland have shown that routine maintenance tasks on overhead distribution lines, such as replacing insulators and transformers, can be carried out without disturbing AccessWrap.


These products create an opportunity for power utilities to roll out communications networks on to their electricity distribution infrastructure, potentially connecting all the way through to the users’ premises and linking these to headends at major substations or regional control centres. This type of infrastructure may be required to provide the communications networks to support Smart Grids, and utilities may build these networks for this purpose only. However, once built, such networks would support other applications and could generate revenue opportunities in providing carrier services to third parties such as broadband service providers and mobile operators. The combination of Smart Grids that enable utilities to meet Green Agenda targets and access to additional revenue streams from existing assets may provide sufficient encouragement to utility companies for them to begin building these networks. If and when this happens, broadband connectivity would be extended to many isolated communities right across the country.

Primary author: 

Community Hubs

Coming soon...

Primary author: 

Tackling the Backhaul Question

By Annelise Berendt, Point Topic

Accessible, affordable, high-speed backhaul has been identified as key to bringing next-generation broadband services to the UK’s rural and remote communities. These locations tend to suffer from lack of access to backhaul provision because they are usually some distance from their nearest BT exchange and are situated in areas not served by other commercial providers.

The importance of backhaul was highlighted by the Coalition Government in its broadband action plan published on 6 December 2010. “Our aim is to ensure every community has a point to which fibre is delivered, capable of allowing the end connection to the consumer to be upgraded – either by communities themselves, or since this will make the business case more viable, industry itself might choose to extend the network to the premises.” The plan, entitled “Britain’s Superfast Broadband Future”, proposes a “digital hub” in every community by the end of this Parliament (in 2015) and Broadband Delivery UK (BDUK) is to explore the viability of the approach at a local level. This builds on the idea of the “digital village pump” first coined by community interest company NextGenUs UK in 2010.

The Digital Scotland Report, published on 26 October 2010 by The Royal Society of Edinburgh, explores backhaul provision in greater depth. “Lack of backhaul capacity limits the provision of local access, the delivery of next-generation speeds to homes and businesses, and the rollout of mobile data services.” A number of remote communities have built their own high-speed local access networks but have limited speeds as a result of sharing a slow backhaul connection. In Scotland these include Tiree, Eigg and Knoydart. The report adds that a high-speed backhaul infrastructure would stimulate investment to build new local access networks as well as benefiting those that already exist.

Proponents of better backhaul, particularly fibre backhaul, cite not only its beneficial effects on next-generation local access network provision but also other benefits, including greater efficiency in public services and enabling mobile operators to roll out 3G and LTE 4G mobile broadband offerings.

Industry has highlighted a number of ways in which the cost of both backhaul and access network construction could be reduced, namely sharing existing infrastructure, deployment of new overhead infrastructure, microtrenching and sharing streetworks. Other approaches on backhaul are also coming to the fore, the most interesting of which are demand aggregation on alternative infrastructure and the use of public sector networks.

In this short report we identify the options for providing backhaul to communities seeking next-generation broadband speeds, particularly those in remote areas. We look at the cost of providing backhaul and some of the products available today, together with what is expected to be available in future. The emphasis is on fibre-optic solutions.

Defining backhaul

Backhaul is the connection over which traffic is carried from a local aggregation node such as a street cabinet or telephone exchange back to an internet gateway. It is sometimes referred to as the “middle mile” as opposed to the “last mile” or the local access network. Backhaul can be provided using different types of technology: fibre optic cable, fixed wireless radio and microwave technologies and satellite.

Essentially there are three flavours of backhaul – local, regional and national:

  • Local backhaul takes traffic from the primary connection point (PCP) back to a local aggregation point or node. Typically the PCP will be one of the green street cabinets operated by BT Openreach, used as an access point for a communications provider involved in sub-loop unbundling, and the aggregation point will be a BT exchange.
  • Regional backhaul collects traffic from the local aggregation node and delivers it to an aggregation point where a national backhaul provider has a point of presence (PoP). Here it connects to the national backhaul network. However, this regional aggregation point need not be a BT exchange. Other providers including Cable & Wireless, Virgin Media and TalkTalk have similar connection points, as do some local authority networks.
  • National backhaul takes traffic from the regional aggregation point to a telehouse for internet breakout and onward delivery to the voice network. As above, the national link can be provided by various other providers besides BT.

The backhaul network needs to have enough capacity to serve aggregated traffic demand from the entire community it serves. End-users do not all use the network simultaneously but the network should still be able to handle peak hour demand.

The most likely approach for getting backhaul to a community deployment is for Openreach to provide a fibre as part of its Ethernet portfolio. Alternatively the fibre may be dug by a fibre-laying company, of which Openreach is one. Existing dark fibre may be another option although this is less likely to be available beyond urban areas and national routes.

Alternatively, wireless technology could be used to provide the local backhaul element using 5.8GHz radio. This would involve conducting line-of-sight surveys and sourcing suitable premises for masts or erecting poles, together with gathering the required wayleaves and landlord commitments. Both fibre and wireless approaches have been employed by Rutland Telecom, for instance, which uses Openreach fibre for its Lyddington sub-loop unbundling deployment, and point-to-point radio for backhaul from a number of smaller villages in Rutland.

Costs of backhaul provision

The problem for many rural and remote communities is that the local backhaul element simply does not exist in any readily accessible commercial form. The effect of geography and distance means therefore that backhaul provision comes at a high price. The cost of backhaul varies depending on the individual circumstances of deployments. Anecdotal accounts of specific backhaul costs include those cited in the Digital Scotland Report of £140,000 per year for 34Mbps backhaul supplied by BT to the Connected Communities network on the Western Isles in Scotland. The report goes on to estimate installation and operational costs of £250 million over 15 years for the 2,500 km of fibre it says is needed to bring backhaul to Scottish communities of more than 800 homes.

To explore how significant the cost of backhaul is for rolling out NGA, Point Topic has calculated the implications of Openreach’s prices for backhaul projects to serve communities of different sizes over a range of distances. For local backhaul we assume communities at 1,000, 2,500, 5,000 and 7,500 metres from the serving BT exchange. We also consider how the costs per household or business look if they are allocated across 250, 50 or only 10 premises.

Each community is served by one PCP with fibre-to-the-cabinet (FTTC) deployment using sub-loop unbundling, putting VDSL2 into the cabinet. Thus an optical fibre is required to connect the communications provider’s (CP) cabinet, adjacent to the PCP, to the serving BT exchange. The prices for Openreach’s Ethernet Access Direct (EAD) products are used. EAD is due to replace Openreach’s current Backhaul Extension Service (BES) and Wholesale Extension Service (WES) products in June 2011. Prices include both one-off and annual rental elements, corresponding to the standard telecoms categories of capital and operating expenditure, capex and opex.

The differences in economic impact across this range are considerable. If fibre is already available and costs can be recovered from as many as 250 premises then the one-off capital costs would be quickly paid for and continuing opex would be quite modest per home or business, at only £20.30 per year even at 7.5km range. But recovery from as few as 10 premises gives opex per premises of £273.50 even at a short distance from the exchange, far beyond what is likely to be economic on a commercial basis.

The picture is less encouraging if new fibre has to be provided. Opex stays the same but capex is much higher, ranging from £23.80 for a home in a large and nearby community to £3,195 for one in a small community far from the exchange. And costs go up by another order of magnitude if a new duct has to be dug for the whole distance as well. Here the range of capex is from £163.80 to £29,445.
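The allocation arithmetic behind these per-premises figures can be sketched as follows. The circuit charges used here are illustrative placeholders, not actual Openreach EAD prices:

```python
# Sketch of allocating a shared backhaul circuit's costs across premises.
# The charge values below are hypothetical, NOT real Openreach EAD prices.

def per_premises_costs(one_off_charge, annual_rental, premises):
    """Split a circuit's one-off (capex) and annual rental (opex)
    charges evenly across the premises that share it."""
    capex = one_off_charge / premises
    opex = annual_rental / premises
    return capex, opex

# A circuit with an assumed £2,000 connection charge and £5,000/year rental:
for n in (250, 50, 10):
    capex, opex = per_premises_costs(2000, 5000, n)
    print(f"{n:4d} premises: capex £{capex:,.2f}, opex £{opex:,.2f}/yr")
```

The pattern matches the text: the same circuit that is modest per home across 250 premises becomes uneconomic when recovered from only 10.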

It is also important to remember that here we are looking simply at the cost of backhaul. The figures quoted are only a small part of the total cost of providing a broadband service. They do not include, for example, the cost of the CP’s street cabinet or the cost of the unbundled tie cable from PCP to home, among many other things. Legal and planning costs, exchange costs, marketing costs and a profit margin all need to be covered by the full price quoted to the end-user.

These simple calculations raise a number of questions without providing answers. What is a reasonable amount to spend on providing broadband to remote places? If my house is a few hundred thousand pounds cheaper because it is remote, would it be worth investing even £30,000 to abolish some of the disadvantages of remoteness? And what should the working assumptions be about the take-up of superfast broadband services in rural communities? Commercial CPs cannot afford to assume 100% take-up of a service, or anything close to it, but it makes sense to assume 100% in cost-benefit analysis of a publicly funded project. In the long run the aim will indeed be to achieve 100%. Many homes will be users without being aware that they are accessing the internet at all, whether for streaming TV, telecare or smart metering.

Primary author: 

Network Capacity Planning

Internet service providers (ISPs) have historically talked about "contention ratios" when describing broadband. The contention ratio is the number of people sharing a given connection. Early ADSL services offered two levels: 20:1 for business and a cheaper 50:1 for consumers. Sharing a connection with only 20 people is clearly better than the higher number. Note that in this case the contention is at the telephone exchange – not on an individual’s line.
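The worst-case arithmetic behind a contention ratio can be sketched as follows; the line rate is illustrative:

```python
# Minimal illustration of a contention ratio: if everyone sharing the
# exchange link transmits at once, each user's share is the line rate
# divided by the contention ratio. Figures are illustrative.

def worst_case_share_kbps(line_rate_kbps, contention_ratio):
    return line_rate_kbps / contention_ratio

print(worst_case_share_kbps(512, 20))  # business, 20:1 -> 25.6 kbps
print(worst_case_share_kbps(512, 50))  # consumer, 50:1 -> 10.24 kbps
```

In practice users rarely all transmit at once, so the typical experience is far better than this floor suggests.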

With the introduction of ADSL Max (and later ADSL2+ and FTTx), contention ratio disappeared from the language as BT changed to guaranteeing a certain throughput through the telephone exchange at peak times. Consumers could still pay more to get a higher throughput guarantee, depending on the package purchased.

Nowadays the contention has moved from the equipment in the telephone exchange to the backhaul into an individual ISP’s core network, as described in this article. It isn't totally analogous to the contention ratio of old because instead of sharing 2Mbps on a 2Mbps connection, you are sharing a connection that has much higher bandwidth than your local pipe. This difference is not visible to the end user, but is something that affects the overall quality of the customer experience. Perhaps more importantly, bandwidth sharing on the backhaul connection must be taken into consideration by an ISP when planning network capacity.

Once the initial capital investment has been made, the most expensive ongoing element of a broadband service is the backhaul. This is why a typical ISP provides packages with usage limits: to a rough approximation, the more gigabytes you use, the more it costs the ISP. Connectivity, however, is measured in bits per second, not bytes, so how does an ISP decide how much bandwidth it needs in terms of bps?

For an existing ISP planning to expand its market share and take on more customers this will be straightforward.  It divides its existing peak backhaul bandwidth usage by the number of customers (known as tails) and gets an average usage per tail. It then uses this average figure to calculate the total additional bandwidth needed for a given number of new customers.
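The per-tail calculation described above can be sketched as follows; the traffic figures are illustrative, not real ISP measurements:

```python
# Sketch of the capacity-planning arithmetic described above: derive an
# average per-tail figure from measured peak backhaul usage, then size the
# extra bandwidth needed for new customers. Figures are illustrative.

def usage_per_tail_kbps(peak_backhaul_mbps, tails):
    """Average peak-time usage per customer (tail), in kbps."""
    return peak_backhaul_mbps * 1000 / tails

def extra_bandwidth_mbps(per_tail_kbps, new_tails):
    """Additional backhaul needed for a given number of new customers."""
    return per_tail_kbps * new_tails / 1000

# An ISP seeing 80 Mbps at peak across 2,000 tails averages 40 kbps/tail,
# so 500 new customers imply roughly 20 Mbps of extra backhaul:
per_tail = usage_per_tail_kbps(peak_backhaul_mbps=80, tails=2000)
print(per_tail)                                  # 40.0
print(extra_bandwidth_mbps(per_tail, new_tails=500))  # 20.0
```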

There are things to look out for here. First, consumers normally use less bandwidth than businesses. This is almost certainly down to the fact that there are likely to be more users sat behind a business broadband connection than a connection into someone’s home.

All ISPs will have different metrics, but a figure of 40kbps for consumers and 70kbps for business is a reasonable average. It should be noted that these numbers are constantly growing in line with increased online usage. The rule of thumb has traditionally been that internet usage grows by around 50% per annum, so 70kbps today is likely to be 100kbps in a year's time. Past growth is not necessarily an indicator of future growth, however. Increasing usage of HD video, for example, could completely change the metric.
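The traditional 50%-per-annum rule of thumb compounds as follows (a historical average used for planning, not a forecast):

```python
# Projecting per-tail usage under the traditional 50%-per-annum growth
# rule of thumb mentioned above. This is compound growth: usage after n
# years is the current figure multiplied by 1.5 to the power n.

def projected_usage_kbps(current_kbps, years, annual_growth=0.5):
    return current_kbps * (1 + annual_growth) ** years

print(projected_usage_kbps(70, 1))  # 105.0 - roughly the "100kbps" above
print(projected_usage_kbps(40, 2))  # 90.0  - consumer figure two years out
```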

The other factor driving usage is the speed of the local delivery technology. Someone with a 2Mbps ADSL connection is going to use less bandwidth than someone with 8Mbps ADSLMax, who in turn uses less than someone with a 24Mbps ADSL2+ connection, and so forth. Each jump in technology has resulted in a growth of usage of around 30 to 50%. These figures are all rough orders of magnitude because they will be different for each user community.

It gets worse – at least from the ISP’s perspective. As local access speeds increase, the minimum backhaul bandwidth required to serve a community becomes far higher. For example, a 2Mbps connection requires a minimum of a 2Mbps backhaul, otherwise it can’t possibly be a 2Mbps connection. Similarly, a 100Mbps circuit needs a minimum of a 100Mbps backhaul.

However a 100Mbps backhaul is far more expensive to provide than a 2Mbps connection. The barrier to entry has just been raised. A 100Mbps Ethernet pipe will likely cost in the region of £20,000 per year, so the ISP needs to find a critical mass of users willing to sign up for the service to cover this cost.

The calculation of how much backhaul bandwidth you need to provision therefore starts at the maximum speed of a single local access circuit. A single 100Mbps backhaul serving 100Mbps fibre-to-the-home connections will potentially be sufficient for as many as 1,000 connected tails or more as they are not all using the network to its maximum rating at the same time.
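The two constraints described above – the backhaul must be at least the speed of a single access circuit, and at least the aggregate of average per-tail usage at peak – can be combined in a simple sizing sketch. The per-tail figure is illustrative:

```python
# Sizing sketch combining the two constraints described above: backhaul
# must be no smaller than one access circuit's maximum speed, and no
# smaller than the sum of average per-tail peak usage. Figures illustrative.

def required_backhaul_mbps(access_speed_mbps, tails, per_tail_kbps):
    aggregate_mbps = tails * per_tail_kbps / 1000
    return max(access_speed_mbps, aggregate_mbps)

# 1,000 tails at 40 kbps average aggregate to only 40 Mbps, so the single-
# circuit floor of 100 Mbps dominates; at 5,000 tails the aggregate takes over:
print(required_backhaul_mbps(100, tails=1000, per_tail_kbps=40))  # 100
print(required_backhaul_mbps(100, tails=5000, per_tail_kbps=40))  # 200.0
```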

It isn’t rocket science to work out that two 100Mbps users trying to use their connections at full speed at the same time should only get 50Mbps each off a 100Mbps backhaul. Whilst this may be true in theory, in practice users are never using the full capability of their connection, and this is one of the reasons ISPs offer “up to” speed packages.

As users are added to the network the ISP will monitor usage, and if it sees maximum capacity being reached regularly with resultant network congestion it will increase the bandwidth available. As the number of connections grows, the required backhaul bandwidth can be based on a usage per tail figure that the individual ISP will have calculated based on its local experience.

Controls can be put in place to mitigate usage abuse and manage growth. For example, someone continuously using BitTorrent to download files from the internet could permanently saturate a link and degrade everyone else’s service. Some ISPs therefore limit the amount of bandwidth that can be used by protocols such as BitTorrent so that a satisfactory overall service is maintained for other users.

The inclusion of this type of functionality in a network needs to be designed into the original architecture. It is also something that requires transparency in the commercial terms with customers.

Primary author: 

Careful planning keeps costs under control

Guaranteeing operational quality while reducing expenditure is the ongoing objective of all telecommunications network operators. Edgar Aker of Draka Communications, now part of the Prysmian Group, explains how a combination of design software and innovative products can significantly reduce total cost of ownership.

As a leading cable manufacturer, Draka engineers have seen from hands-on experience how next-generation telecommunications networks are driven by innovation. They have also witnessed the determination of operators to reduce capital expenditure (capex) and operational expenditure (opex) to produce a lower total cost of ownership (TCO).

Getting more for less may seem like a tall order, especially in tough times, but approaching the design of a network from the top down and building it from the bottom up will provide positive results.

Top down design

The modern telecommunications network can be seen as a three-tier pyramid, with the passive infrastructure at the bottom supporting the active network in the middle, and retail services at the top. The passive infrastructure provides the foundation, and the layers above rely on it to ensure optimum quality of service.

With this in mind, a network should be designed from the top down. For example, if a passive optical network (PON) is used then the other two layers should be designed accordingly. However, when it comes to network build it is important to take a bottom-up approach by considering all the components available as well as physical limitations such as duct sharing, rights of way and local registration.

The service side of the network is constantly changing and developing as network technologies progress. If enough attention is paid to specifying the right passive infrastructure, it can last between 20 and 30 years, enabling the active network to be future-proofed for three to four years. Therefore, from a business case perspective, cost calculation should focus on the passive layer, using careful design and planning to reduce capex and opex.

It is important to remember that there is no "one size fits all" solution. A network should be designed according to local requirements, and operators need to carefully define what they want and expect from it – whether it is low latency, large bandwidth or reliability. Only then should the technology and topology be chosen. 

Software control

Reducing the capex of a passive infrastructure involves reducing the costs associated with installation, civil works, optical fibre cables, connectivity, network engineering and project management. Successfully limiting the expense associated with these various elements requires a holistic approach to network planning. Special design software, such as Draka’s XSNet Network Software Suite, can help operators specify the most appropriate network concept.

Design and planning software creates the most cost-effective network by automating, sequencing and simplifying components and processes as much as possible. By incorporating intelligent mathematical algorithms, users can change parameters and design various network concepts within minutes. The software eliminates the need to "guesstimate" material requirements, which means no more having to redo preliminary drawings or cost calculations when a project gets the go-ahead. 

Getting the design right keeps costs under control, while optimisation tools ensure that the exact quantities of materials are ordered. By employing a smart planning approach, digital information can be used to analyse and visualise various network scenarios quickly and easily, while survey information can also be used to create detailed lists, drawings, working reports and schematics. 

By investing in digital maps users are also able to identify existing infrastructure and avoid on-the-job changes to plans, making sure that material and labour costs do not increase once the project has started. If the design specification does need to change the software calculates and redesigns the network automatically.

Time is money

Labour can form as much as 38% of the total cost of a network build, and on-site labour is notoriously difficult to budget for. However, once the design of the network infrastructure has been finalised, there are a number of methods to reduce the expense associated with it. 

Over recent years we have seen increased demand for prefabricated points of presence (POPs). A prefabricated POP is built in a factory controlled environment, which means that it can be fully checked, tested and signed off prior to being delivered to site. Once delivered, it can be positioned on a concrete foundation and then the cables and/or ducts can be connected quickly and easily.

A prefabricated POP offers the highest flexibility and ultimate network stability to meet the needs of urban installations. It can be pre-planned and inserted into network infrastructures, providing a secure facility housing servers, routers, ATM switches and digital and analogue call aggregators. Pre-fabricated POPs also help network service providers achieve an optimum POP cost/connection ratio for densely populated areas.

Increasing the density of connections, and reducing the size of the cables, patching products and associated components, can further reduce the cost of POPs, while the use of bend-insensitive fibre-optic cables can improve handling and reduce installation time.

Speedier installation of outside plant can make a huge difference to a project’s overall expenditure and this can be achieved in a number of ways, including eliminating on-site splicing and simplifying installation techniques using the latest plug and play technologies. Digging costs can also be minimised with the use of smaller cables and connectivity to reduce civil works.

Innovative solutions are also available to reduce the time and labour costs of indoor installations and eliminate the need for splicing at the customer premises. By pre-fitting a fibre-optic cable with the ferrule of an LC connector, it can then be blown, pushed or pulled through microducts. Once it is located at the termination point, the connector housing is snapped around the ferrule and the cable connected.

Avoiding surprises

By taking a methodical approach to network infrastructure design and build, it is clear that opex and capex can be reduced. This means understanding rather than underestimating the role of the passive infrastructure and the use of smart engineering tools to optimise the capabilities of the network. Taking the time to use design and planning software and using the latest products and installation techniques will reduce the risk of unpleasant surprises, and can significantly lower TCO.

Primary author: