Quanta Technology Blog
21st Century Networking
Posted on: Aug 20, 2015
Distributed Secondary Networks (DSNs) are ubiquitous in the downtown areas of North American cities. They serve the vast majority of central high-rise business districts and institutional campuses and corridors in major cities throughout the United States and Canada. Many of these systems were put in place in the first half of the 20th century. More than a few have civil facilities – under-street cable ducts and manholes, and under-sidewalk transformer vaults – that are more than a century old. In most cases, these systems have been maintained and upgraded so that, despite their age, they do a satisfactory job of providing electric service in the cores of large metropolitan areas. But they are expensive to maintain and operate, and very expensive to expand and upgrade. And, of most concern, nearly all of these networks are old.
Many DSNs are in slowly deteriorating condition and nearing the end of their useful service lives. As utilities look to the future, none of the apparently viable options for renewing their downtown systems looks very appealing. All the traditional options, including replacement in kind, are expensive and difficult to engineer and build, with maintenance and service requirements that create safety and operating concerns. Some modern approaches may provide equivalent service quality at lower cost and with better operating ease and safety, but they are not completely proven technologies and thus pose a real business and operating risk.
Distributed secondary networks were an early 20th century development that fit the needs and technological capabilities of the time. They could be built entirely underground in crowded downtown areas and serve very high levels of load density (for the time) with outstanding inherent service reliability. Most important at the time, despite their apparent complexity, they could be planned and engineered with only very limited data on loads and equipment and with nothing more than the hand-computations and nomographs that were used before computers were available.
Early 20th century design guidelines and practices – found, for example, in AIEE Transactions of the time, or in the earliest versions of Westinghouse’s (now ABB’s) Transmission and Distribution Reference Book – amounted to grossly over-specifying required cable and transformer capacity (often by 3X). Basically, it all boiled down to "build it big enough and it can handle anything remotely expected." Such practices led to acceptable budget needs at the time, because tripling the "cost of copper" increased overall cost by only about 10-15%: the vast majority of the cost of a new underground network was in building the civil facilities – ducts, manholes, vaults, etc. Furthermore, these networks could be safely maintained and operated using no more than the very limited monitoring, measuring and record-keeping systems available at the time. And finally, there simply was no competing type of system that better fit the needs of downtown-area service and industry technology. Networks became the preferred way to serve the core areas of major cities.
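The arithmetic behind that 10-15% figure can be sketched in a few lines. The cost shares below are illustrative assumptions (actual breakdowns vary by utility and era), chosen to show how tripling a small slice of the budget moves the total only modestly:

```python
# Illustrative cost breakdown for a new underground network (assumed shares,
# not actual utility figures): civil facilities dominate the budget, while
# conductor and transformer "copper" is a small slice.
civil_share = 0.93    # ducts, manholes, vaults (assumption)
copper_share = 0.07   # cable and transformer capacity (assumption)

# Over-specifying capacity by 3X triples only the copper slice of the budget.
overbuild_factor = 3
new_total = civil_share + copper_share * overbuild_factor
print(f"Total cost increase from 3X capacity: {new_total - 1.0:.0%}")  # → 14%
```

With a copper share anywhere in the 5-7.5% range, the same calculation lands in the 10-15% band the old guidelines implied.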
DSNs looked good to early 20th century utilities, but with the third decade of the 21st century only five years away, they look a bit less appealing. That extra "cost of copper" originally built into them might have been just ten to fifteen percent, but that is ten to fifteen percent of a lot of money. Modern computerized engineering tools can take any excess capacity and cost out of a network system, and many utilities have done so, bringing their DSN margins down to levels equal to other parts of their distribution system. But when that is done, networks reveal their true nature.
Like the proverbial bad co-worker who is easy to live with when asked to do only light work, but turns surly, high-maintenance and untrustworthy when pushed to work hard, networks change their personality. Load flow and short-circuit results are highly sensitive to assumptions about cable lengths and power factors. Many utilities have learned to treat even the most detailed and intricate load flow analysis as only approximate, and to leave a good deal of capacity margin in place to cover the resulting uncertainty. In some cases, it is virtually impossible to determine where faults occurred, or to find the cause of operating anomalies, without lengthy equipment outages and expensive investigation. Maintenance and service can be very expensive, hard to schedule, and often a safety concern. And expansion costs are through the roof – and, when margins are small, expansions are frequent.
Finally, many networks fed by lower-voltage primary feeders – all of those below 10 kV and, in some cases, even those below 15 kV – are not up to serving modern downtown load levels, so that major new developments require expensive customized solutions anyway.
Therefore, it is not surprising that, as they look at their aging networks and realize that eventually something must be done before those networks completely wear out, few American utilities want to invest a lot in new or renewed downtown secondary networks. Frankly, many don’t want to invest a lot, period. Any viable solution to the "downtown network problem" is going to be expensive. So for many, the preferred solution is to patch up the existing network in order to get a few more years of useful life out of their century-old systems.
This kick-the-can-down-the-road approach should not be criticized too much. Keep in mind that when the approach is followed, the can is further down the road. The few years it buys give many of the possible replacement alternatives (e.g., smart primary-select spot networks combined with smart DSM, microgrids, site-specific energy storage) time to prove themselves and to work their costs down and their standards up to acceptable levels. So in many cases, it is not clear this isn’t the best approach.
But that "can" has been kicked only a few years down the road. Eventually, perhaps in only ten years, something major and very expensive will have to be done. There should be no mistake: older downtown networks will have to be replaced. Decisions need to be made now about when replacement of the networks will be started, so that the what, where and when can be planned, and the how-do-we-pay-for-it addressed with regulators and stockholders.
In thinking about their new 21st-century downtown replacement distribution systems, utilities need to consider many factors. Among them are:
The decision must be made sooner than one thinks. Distributed secondary networks are incredibly reliable when in good condition, often with a System Average Interruption Frequency Index (SAIFI) of less than one interruption per decade. But a seldom-recognized characteristic is that their service reliability is an exponential function of cable, transformer and other component failure rates, with an exponent typically between three and eight depending on design. If the failure rate of an aging DSN’s equipment doubles, SAIFI will increase by a factor of somewhere between eight and 256! Given that interruptions, while infrequent, are typically widespread and take many hours, if not days, to repair, waiting too long to replace an aging network is inviting catastrophe.
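That sensitivity is easy to see numerically. The sketch below is just the rule of thumb stated above – SAIFI scaling as failure rate raised to a design-dependent exponent – not an actual network reliability model:

```python
# Rule of thumb from the text: network SAIFI varies roughly as component
# failure rate raised to an exponent k between 3 and 8, depending on design.
def saifi_multiplier(failure_rate_ratio: float, k: float) -> float:
    """Factor by which SAIFI grows when component failure rates
    all scale by the given ratio."""
    return failure_rate_ratio ** k

# Doubling failure rates on an aging network, at both ends of the range:
for k in (3, 8):
    print(f"exponent {k}: SAIFI grows {saifi_multiplier(2, k):.0f}x")
# exponent 3: SAIFI grows 8x
# exponent 8: SAIFI grows 256x
```

A once-per-decade interruption frequency multiplied by 256 becomes a network that fails many times a year, which is why the curve turns catastrophic so quickly.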
It’s not all about downtown levels of reliability. DSNs are arguably the most inherently reliable type of power distribution system that can be built. Smart and distributed resource systems can match their service reliability, but in a practical sense, only when "forced" to do so. That inherent reliability comes at a high cost – in the vast majority of cases, a renewed or all-new distributed secondary network is the most expensive alternative a utility can select, by a noticeable margin.
The incremental cost of reliability, as provided by Uninterruptible Power Supplies (UPSs), sets a practical and justifiable upper limit on how much a utility should spend to provide reliability via its power system. Societally, and from the customer's standpoint, it makes sense and is more economically efficient to serve sites, customers or individual loads that need high-reliability power via a UPS, rather than build a high-reliability power system whose higher cost must be socialized over customers and loads that don’t need those extreme levels of reliability.
Smart distribution alternatives offer lower cost and competitive reliability. This is true in many cases, but they are complex systems that take a lot of time to plan, engineer, design and build. A utility needs to give itself time to get it right if it expects these solutions to work efficiently and economically.
Microgrids and distributed resources can look quite appealing. However, failure rates, long-term costs and service lifetimes for some of the components on which economy and reliability depend, such as energy storage and LV power-electronics control and switching, are not well proven, and there is risk. Moreover, taking this route can lead to a hodgepodge of different types of systems serving downtown. The utility deciding on this course needs to think through carefully the long-term business, engineering and investment implications, and to create an infrastructure and resource base to monitor, manage and coordinate all those disparate micro-power systems in an effective business and operational manner.
Distributed secondary networks need to be considered as an option. Yes, they are a century-old method of serving high-density downtown areas, but if renewed, they can work as well as they ever did and can continue to meet the electric service needs of some metropolitan cores. However:
- Secondary networks fed with primary voltages below 12 kV typically cannot satisfactorily meet modern downtown load-density needs. Networks fed at 2.4, 4, 8 and perhaps even up to 13.8 kV typically do not make the cut in screening studies as viable alternatives for further consideration.
- Secondary networks built to serve modern load levels and fed from higher-voltage primaries can have very high fault duties – up to 150,000 amps in places. In spite of the arc-flash and other safety concerns these fault levels create, such networks can be engineered, built and operated so that they are safe. But that costs money, both initially and on an ongoing basis thereafter, which makes them even more expensive.
- Modern downtown loads are often so large that even when primary voltage is above 12 kV and large service transformers are used, an increasing portion of new loads require primary level service, or need so much power that planners find arranging for additional substation and feeder capacity is the major problem they face. Either way, the secondary network doesn’t really come into play. Over time, an increasing portion of planning, engineering, operations and investment decisions move to the substation and primary level anyway.
As a result, in the vast majority of cases Quanta Technology has seen, distributed secondary networks do not evaluate as the best alternative. But DSNs still need to be considered as the utility looks to the future, because in a limited number of cases they are a viable option, and in all other cases the utility needs to show that DSNs are no longer the best alternative for the future.
Get your ducts in a row. Regardless of what type of power distribution electrical system is finally selected for the utility’s future downtown service, it will need underground civil facilities. The majority of the cost, and the vast majority of constraints and limitations on the what, where, when and how of construction and cost, will depend on the utility’s network of under-street and under-sidewalk circuit routes and equipment stations.
Thus a useful, even necessary, first step in planning for the future – one that may need to be done now – is to study what the utility currently owns, taking into account the age and deteriorated condition of its old civil facilities (often next to nothing useful); what can realistically be built, when and where; and how that compares to what would be needed for each of the various renewal and replacement scenarios discussed above. Such a study identifies options the utility may not be able to afford, or even obtain regardless of cost, and shows when and why it may need to begin installing new ducts now, even if planners are still not certain exactly what type of electric system will occupy that new underground real estate.
For all these reasons, while now may not be the time to begin building the 21st century’s replacement for those near-100-year-old networks, it is the time to start making firm plans to do so.
Posted by Lee Willis on Aug 20, 2015