

Lit San Leandro - low latency means true bandwidth delivery

#3 in a series on Lit San Leandro

We all know that everyone and their brother these days sells an internet connection.   The marketing hype sells us on two things: 

1)  Price
2) "Speed"  

I have already covered the first of these, price (the short version: old-school frame relay/virtual circuit "T1/T3" systems cost 10x that of the newer metro-Ethernet connections, so systems like Lit San Leandro are a better bandwidth value by default these days).

Speed, as it pertains to networking, is an awfully nebulous term (at least to an engineer/math geek's ears) unless you understand the components of "speed" when it comes to bandwidth. Speed has to be broken down into two definable sub-parts: rate and latency.

The rate is simply the number in megabits per second (or, with Lit SL, gigabits per second!). The rate is a theoretical maximum of the physical connection [the underlying electrical signaling (for copper lines) or optical pulses (for fiber optics)]. Now realize there is always a gap between theory and reality. If someone says they are delivering "X" megabits per second, what they really mean is "X minus overhead and latency". Overhead is the routing information that tells the network where the data goes and how to send back a reply - basically street information, like a house address, for networked computers. Any line you buy carries a small overhead; think of the overhead as the envelope and the data as the letter inside it.

Overhead is minimal, but it is there. On a 6 Mbps xDSL line, overhead means your absolute data rate (the letters in the envelopes) is about 5.9 Mbps, because you need space on the line to send the envelopes too. But it is not overhead that really kills your actual data rate; the culprit is latency.
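If you want to sanity-check that envelope math yourself, here is a quick back-of-the-napkin sketch in Python. The header sizes are illustrative assumptions on my part (a standard 1,500-byte packet carrying basic IP and TCP headers), not the exact framing any particular xDSL provider uses:

```python
# Back-of-the-envelope payload-rate estimate. Header sizes here are
# illustrative assumptions (IPv4 + TCP with no options, standard
# Ethernet-sized packets), not measurements of any provider's line.

LINE_RATE_MBPS = 6.0   # advertised xDSL rate
PACKET_BYTES = 1500    # bytes per packet on the wire
HEADER_BYTES = 20 + 20 # IPv4 header + TCP header ("the envelope")

payload_fraction = (PACKET_BYTES - HEADER_BYTES) / PACKET_BYTES
effective_mbps = LINE_RATE_MBPS * payload_fraction

print(f"payload fraction: {payload_fraction:.3f}")    # ~0.973
print(f"effective rate:   {effective_mbps:.2f} Mbps") # ~5.84 Mbps
```

In other words, the envelopes eat only a few percent of the line - which is why the real story is latency, not overhead.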

For computers to talk to each other, the information is broken down into data packets (the envelope with the letter wrapped neatly inside). How quickly each data packet can be sent or received relative to the next one is called latency.

Think of an old-fashioned "bucket brigade": how much water actually gets thrown on the fire depends on how fast each person in line can pass the 5-gallon bucket (in networking, that is the send) and then get the buckets back to be refilled (that's called the acknowledgement - us geeks just say "ack").

Latency is the undertow of networking. The higher the latency, the fewer data packets can successfully cross the network in a given time period. Again, think about the bucket brigade: one slow person slows the entire chain down.
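For the geeks who want numbers, here is a rough sketch of the bucket-brigade ceiling, assuming a classic TCP-style transfer that sends one window of data and then waits for the ack. The 64 KB window is my assumption for illustration; real network stacks vary:

```python
# Rough throughput ceiling for a send-then-wait-for-ack transfer
# (the bucket brigade). Window size and RTT values are illustrative.

WINDOW_BYTES = 64 * 1024  # assumed unscaled receive window: 64 KB

def max_throughput_mbps(rtt_ms: float) -> float:
    """Window-limited throughput: only one window can be 'in flight'
    per round trip, no matter how fat the pipe is."""
    rtt_s = rtt_ms / 1000.0
    return (WINDOW_BYTES * 8) / rtt_s / 1_000_000

for rtt in (1, 10, 50, 100):
    print(f"RTT {rtt:3d} ms -> at most {max_throughput_mbps(rtt):7.1f} Mbps")
# RTT   1 ms -> ~524 Mbps   (low-latency fiber territory)
# RTT 100 ms -> ~5.2 Mbps   (your "fast" line now performs like DSL)
```

Notice that the advertised rate never even appears in the formula - past a certain point, latency alone decides how much data actually gets through.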

Any shared-topology network, like xDSL and cable, suffers horribly from latency. Traditional dedicated circuits were truly dedicated, meaning one customer circuit ran from the customer's premises to the large backbone connection at the phone company's central office (the "CO", as it is called). xDSL and cable networks are shared networks - cable even more so than xDSL. If the network is shared, then before you reach the big backbone connection you are competing for time on the line with all of your neighbors, or with all of the neighbors from several entire neighborhoods.

I could put my performance-engineering math hat on, but here is an easy analogy instead: You are at Disneyland with your better half and the statistically average 2.6 children. You are the only family in line at Space Mountain - you, your other half, and the kids line up single file and move quickly to the ride. That's a dedicated circuit: you travel to the end point on a dedicated channel without having to worry about or account for other traffic.

xDSL and cable are like standing in line at Space Mountain with 500 other families (some with 15 kids, not 2.6), your own family scattered among a thousand other people. How hard is it to get your whole family onto the ride now?!?


Shared networks introduce a lot of latency for the individual user because, from millisecond to millisecond, one user's data packets will have to be delayed a few cycles while another user's data is sent or received. The more users on at the same time, the more delay is imposed on every connection. In the early days of cable deployments, cable providers massively oversubscribed cable segments, putting thousands of homes on a single line, and the actual data rate per home fell to nearly nothing. Latency (and other factors) can eventually cause congestion and even outright network failure, sort of like how a bad diet causes arterial disease, which can lead to a heart attack. Yes, computer networks can get "sick" too, and we have more than our share of "sickly" networks selling bandwidth to unsuspecting consumers.
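To see why that oversubscription was so brutal, here is a toy fair-share calculation. The segment capacity and home counts are made-up round numbers for illustration, not figures from any real cable plant:

```python
# Illustrative oversubscription math; capacity and subscriber counts
# are invented round numbers, not data from any actual cable segment.

SEGMENT_CAPACITY_MBPS = 400.0  # assumed shared capacity of one segment

def per_home_mbps(homes_online: int) -> float:
    """Fair-share rate if every active home pulls data at once."""
    return SEGMENT_CAPACITY_MBPS / homes_online

for homes in (10, 100, 1000):
    print(f"{homes:5d} active homes -> {per_home_mbps(homes):7.2f} Mbps each")
# 1000 homes on one line -> 0.40 Mbps each: "nearly nothing"
```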

Without getting too technical, there is also a core latency difference between electromagnetically signaled systems (copper lines, i.e., phone lines) and optically signaled systems (fiber optic delivery). Almost all backbones today (the big carriers sending large data coast-to-coast and beyond) are fiber optic transmission systems - light waves over glass cables. The inherent latency problem lies in the way a lot of services hand off the circuit at the customer's premises: they switch over to copper at the doorstep. Converting a signal from optical signaling on fiber to electromagnetic signaling over copper introduces a hefty latency delay. The old transceivers and fiber-to-copper bridges could introduce 500 ms of latency in each direction; the new ones are better but still add latency. A to-the-door fiber optic connection removes that hand-off latency issue entirely.

Removing the "door step" latency is critical because the further over the Internet (in actual cable-miles) you have to send or receive data, the longer each packet will take.   Even though data packet move about 2/3s the speed of light (.667c), every additional mile of cable traversed adds to latency or "trip time" as we sometimes call it.    So if you eliminate the worst part of latency at the "door step" of your business, long distance (including overseas) transmission of data can become increasingly efficient.

Latency inversely impacts bandwidth. The lower the latency, the higher the actual utilization of the circuit; the higher the latency, the less data actually makes it to the destination in a given amount of time.

You can order a gazillion-bits-per-second internet connection, but if it suffers from high latency, your gazillion bits of bandwidth plummet toward unusable very quickly. Lit San Leandro knows this and specifically delivers extremely low latency, which means a high-quality, highly efficient, high-performance connection (performance engineering being another specialty of mine - yes, I could conclusively show you that your current internet circuit does not perform the way you think it does, and how badly it underperforms at the same rate compared to a Lit San Leandro-like connection).

The real value in bandwidth today must be judged on the totality of the circumstances. Price is not the only factor, nor is "speed". Lit San Leandro breaks through the typical bandwidth marketing hype and delivers "clean", effective bandwidth at a competitive price that will simply outperform most other internet / wide-area / metro-area networking solutions on the market today.

More to follow...

