Gigabit Cities

Muni Policies Stymie Edge Computing

SAN FRANCISCO -- Incompas 2017 -- The drive to push intelligence to the edge of the network is being impeded by municipal policies and permitting processes that are expensive and time-consuming, an expert panel agreed here this morning. That could change if a federal infrastructure bill ties construction funding to the use of model codes for infrastructure sharing and permitting.

Incompas CEO Chip Pickering said the Federal Communications Commission (FCC) is working through its Broadband Deployment Advisory Committee to develop model codes for infrastructure sharing, including streamlined permitting processes.

"That is step one of the process, and companies like Sprint, Google Fiber and Rocket Fiber are part of advisory committees helping develop those model codes," Pickering commented. The next step would be Congressional action on an infrastructure bill that would tie use of the model codes to federal funds for building, highway and bridge improvements, he said, to change the financial incentives for municipalities that currently view leasing their rights-of-way and charging for construction permits as income.

That current thinking undermines efforts like those of ZenFi, which is building dense dark fiber networks in New York City and New Jersey, said Ray LaChance, the company's president and CEO. (See ZenFi Brings Dense Dark Fiber to NYC.)

ZenFi's Ray LaChance (Source: Incompas)

"Our number one and number two costs are franchise fees and real-estate taxes -- a huge portion of our costs go there," he commented on the panel. In an interview afterward, LaChance said pole attachments in the Big Apple can cost $400 each. "Cities say they want broadband access for everyone, but when you look at their policies, they are counterintuitive. We would like to see some kind of homogenized franchise view, with free access to public assets."

Municipal policies and permits have long been the bane of fiber deployments, but adding edge compute power only magnifies the issue by creating the need for more real estate in more distributed areas.

The time it takes to negotiate different terms with each municipality also slows deployments, added Jon DeLuca, managing director and operating partner for Digital Bridge Holdings, and former president and CEO of Wilcon Holdings, which built dark fiber for the fiber, small cell and data center markets, predominantly in California, before being acquired.

Complicating the process further is the reality that not all markets have the same value, but municipal leaders still all want the highest price, "so your last deal becomes the floor for the next one," DeLuca said.

Both he and LaChance favored some level of uniformity, but admitted it might have to be done state-by-state. If Congress isn't able to get an infrastructure bill passed (insert joke about ineffective Congress here), then the FCC's model codes might not be adopted.

Digital Bridge's Jon DeLuca (Source: Incompas)

These new fiber access networks will be built regardless, LaChance said, both to cache high-bandwidth video content closer to consumers who need low-latency delivery and to support 5G fronthaul. Municipalities that make deployment easier and more cost-effective will be able to reap the rewards sooner.
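For a rough sense of why proximity matters: light in fiber covers roughly 200,000 km per second, or about 4.9 microseconds per kilometer one way, so the round-trip floor on latency scales directly with route distance. The sketch below uses hypothetical distances, not figures from the panel, to compare a metro edge cache with a distant regional data center.

```python
# Back-of-the-envelope fiber propagation delay.
# 4.9 us/km is the one-way propagation delay of light in standard
# single-mode fiber; the distances below are illustrative only.
US_PER_KM_ONE_WAY = 4.9

def round_trip_ms(route_km: float) -> float:
    """Round-trip propagation delay in milliseconds over a fiber route."""
    return 2 * route_km * US_PER_KM_ONE_WAY / 1000.0

for label, km in [("edge cache, 10 km", 10),
                  ("metro hub, 80 km", 80),
                  ("regional data center, 1,500 km", 1500)]:
    print(f"{label}: ~{round_trip_ms(km):.2f} ms round trip (propagation only)")
```

Propagation is only the floor -- serialization, queuing and server processing add more -- but it is the one component that nothing except shorter distance can reduce.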

These new networks are being driven in part by major Internet content players, which started by connecting their data centers but are now pushing into all aspects of the network, from subsea cables to metro networks, said Mike Capuano, vice president of corporate marketing for Infinera Corp. (Nasdaq: INFN), which sells its optical gear to three of the top four Internet players.

As the wireless industry moves to a cloud radio access network, or C-RAN, and puts in more small cells to enable 5G wireless networks, the need for faster construction of access networks will only intensify, the panel agreed.

— Carol Wilson, Editor-at-Large, Light Reading

msilbey 10/23/2017 | 7:58:26 AM
Re: A totally new take Carol- There are *huge* issues with statewide laws. Also, and fortunately in my opinion, the FCC has already said it won't force anything on cities or states with regard to the new model codes. The agency is hoping that a national infrastructure bill will provide financial incentives for municipalities or states to adopt the code, but the FCC has said it won't introduce new mandated regulations.
brooks7 10/18/2017 | 10:56:17 AM
Re: Vintage So, I get the case for distributed stream delivery...making the assumption that there is a lot of content that can be distributed. That means, essentially, OTT video. I did the math a long time ago (over 15 years at this point), and distributed content storage is a great idea for minimizing distribution bandwidth. But I want to point out that we are now talking about 100s of Gb/s, not the 1 - 10 I was working with then.
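(As a rough illustration of the kind of math described above -- with hypothetical numbers, not the commenter's own figures -- the upstream bandwidth an edge cache needs is roughly the total streaming demand times the cache miss ratio.)

```python
# Illustrative cache-offload arithmetic; every number here is a hypothetical
# assumption, not a figure from the comment above.
peak_streams = 50_000     # concurrent video streams served from one edge site
mbps_per_stream = 8       # a single HD/4K adaptive-bitrate stream
hit_ratio = 0.85          # fraction of requests served from the local cache

total_gbps = peak_streams * mbps_per_stream / 1000
upstream_gbps = total_gbps * (1 - hit_ratio)

print(f"Traffic toward users:      ~{total_gbps:.0f} Gbit/s")
print(f"Backbone draw at 85% hits: ~{upstream_gbps:.0f} Gbit/s")
```

At those assumed numbers, a single edge site is already pushing hundreds of Gbit/s toward users, which is the scale shift the comment points to.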

Second, latency will only really matter for the timing of real-time control. My comment is that there will be tons of problems getting that spotty coverage and usage to be used meaningfully. The reason is that web servers will be used to solve the vast majority of the problems 10+ years ahead of time, and the milliseconds we talk about in those worlds are fine for lots of applications.

And that is my point with this. Yes, there is a desire to remain competitive with web-based services. The problem is that the telcos will take forever to deploy, given the amount of standards work to be done and the capital/labor to deploy it. By then, the high-margin applications will already be solved. Unless for some reason you think the applications are going to wait for this.

To me, it is like all the talk I used to see about how 3D online gaming required massive amounts of bandwidth. The reality is that less than 100 msec of latency is really good service and the bandwidth required is pretty small. These things were attacked in the days of dial-up, and broadband just made it better. The same will hold here. This is a wonderful idea. So, if all it takes is turning COs into DCs...then, well, go ahead. It is not like there isn't a NEBS standard that we all know the COs meet. There are no examples of data centers for people to look at, right?

And Duh!, you left off one more thing. So now we are going to put compute capacity (and power it) in the network. That means way more battery backup and all the horrible OSP things that telcos have been trying to ELIMINATE for about 30 years. So, yeah, no PCs on poles.

seven
Duh! 10/18/2017 | 10:09:27 AM
Re: Vintage Seven,

The differentiator is latency -- 4.9 μs/km. That means something in a world of 10G access/400G core. Especially since TCP isn't going away.
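(A simple bandwidth-delay-product sketch of why that per-kilometer figure matters; the rates and distances below are illustrative assumptions, not anything from this thread. A single TCP flow can move at most one window per round trip, so the data it must keep in flight to fill a fast pipe grows linearly with distance.)

```python
# Bandwidth-delay product: the data a TCP flow must keep in flight to fill
# a link is link_rate * RTT. Rates and distances are illustrative assumptions.
US_PER_KM_ONE_WAY = 4.9  # fiber propagation delay, microseconds per km

def in_flight_mbytes(link_gbps: float, route_km: float) -> float:
    """Megabytes of unacknowledged data needed to keep the link full."""
    rtt_s = 2 * route_km * US_PER_KM_ONE_WAY * 1e-6
    return link_gbps * 1e9 * rtt_s / 8 / 1e6

for km in (10, 100, 1000):
    print(f"10 Gbit/s flow over {km:>4} km: ~{in_flight_mbytes(10, km):.2f} MB in flight")
```

Serving from tens of kilometers away instead of a thousand cuts the in-flight data a flow needs by roughly two orders of magnitude, which is one concrete reading of "the differentiator is latency."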

The question is use cases. The obvious one is CDN. I expect that operators could build a wholesale business in VMs and VS for that. There may also be a case for offloading latency sensitive but not quite real-time computation from mobile devices.

But to the main point: as I understand it, edge computing involves servers in wire centers, probably not on poles (for the foreseeable future). Pole attachments, zoning, rights-of-way, permitting, etc. are indeed a problem... just not for edge computing.
mendyk 10/18/2017 | 9:59:02 AM
Re: Vintage seven -- I wouldn't characterize edge as filling an unmet customer need as much as it is a way for telecom operators to remain competitive. Although there is also the case for zero or near-zero latency, which will become critical for IoT and autonomous systems.
brooks7 10/17/2017 | 9:26:37 PM
Re: Vintage Carol,

I actually strongly disagree with your closeness-to-the-customer bit. Yes, CSPs would like it to be so. But the world is fast moving to data center/AWS/other network-based services. This, to me, is the threat to these programs. The real question is...what customer need gets fulfilled by edge computing that will not already be served by a web service?

seven
degrasse 10/17/2017 | 8:48:27 PM
Re: A totally new take Interesting -- did they say what type of legislation (if any) would be preferable? Or do they find it more effective to work city by city, project by project? 
Carol Wilson 10/17/2017 | 5:03:03 PM
A totally new take So in an afternoon panel here in San Francisco, discussing small cells, the panelists are much less supportive of statewide bills to ease the permitting process for antennas and other wireless equipment. A bill to do this was just vetoed by California Gov. Jerry Brown because the cities claim it gave away too much.  Spokesmen from Uniti Fiber, Lightower and others say that when statewide bills pass, they actually initially undermine the trust relationships they had been building with municipalities in those states. 
Carol Wilson 10/17/2017 | 4:42:30 PM
Re: Vintage Maybe the "link between rights of way and edge computing is a bit disingenuous" is true right now, but that is changing. 

IT infrastructure will be deployed closer to the customer in coming years, in all kinds of specialized ways. That stuff will be sitting in some of the places that telecom gear sits now. Of course, it's going to need specialized enclosures/power/security etc. 
mendyk 10/17/2017 | 4:25:02 PM
Re: Vintage Muni policies are meant to stymie lots of things, mostly related to allowing people to do whatever they want without oversight. And yes, bureaucracy takes time to navigate. Contractors have to do this all the time, even when they are working on private property. Also, making a direct link between rights of way and edge computing is a bit disingenuous. You don't need to build a network from scratch for that.
Carol Wilson 10/17/2017 | 4:18:36 PM
Re: Vintage Hmm, maybe I'm making them seem like whiners - I think this bunch had some good points to make. If you are running a municipality and want broadband access for your entire population, making it more expensive, time-consuming and cumbersome to do that doesn't make a lot of sense. 

It's probably unrealistic to think that cities are going to start providing rights of way for free - although I think that was baked into the Google Fiber plan initially - but they can make it easier.

The panel I'm in right now includes a fellow from the San Francisco technology department and he just rattled off four to five other city departments that are engaged in approving permits for small cells. The process takes an average of 76 days and that's considered speedy. 