Network Engineering Stack Exchange is a question and answer site for network engineers.
As the title explains, does having a longer Ethernet cable slow your connection down?

The maximum length for a cable segment is 100 m per TIA/EIA 568-5-A. If longer runs are required, active hardware such as a repeater or switch is necessary. So having a longer cable (<100 m) won't affect your connection. Wikipedia – Babu 23 hours ago
Signal propagation time through the cable is not significant. The real problem is packet loss, plus the stated maximum length. The maximum RTT of 100 Mbps Ethernet allows around 250 m of cable, which is just over 100 m back and forth, plus some time for the NICs to do some processing. – Filip Haglund 7 hours ago
I'm aware this is non-standard, but we have 110+ meter long Ethernet channels that work reliably without any rx errors. – Max Ried 6 hours ago
@peterh: That is a very optimistic estimate. If you assume 16,000 kilometers distance (which is certainly too little) and account for the approx. 30% increase due to the photons travelling zig-zag inside the cable (see physics.stackexchange.com/questions/80043/…), plus consider that c is only 2/3 of what it is in vacuum, you have 105 ms one-way. Thus, upwards of 200 ms, with no routers. Now, the Univ. of Melbourne pings at an astonishing 166 ms RTT for me (via 19 hops), but it turns out it's hosted in the Amazon cloud on the US west coast... :-) – Damon 3 hours ago
@Damon :-) Yes. But consider that the packets also have to come back. Australia is almost exactly on the opposite side of the Earth from Europe, so I think we can calculate with 2 × 20,000 km. With +30% zig-zag that is 52,000 km; at 2/3 c it comes to around 250 ms ping reply time. – peterh 3 hours ago
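The back-of-the-envelope estimate in the comments above can be checked with a few lines of Python. The inputs (20,000 km one-way antipodal distance, +30% path zig-zag, signal speed of 2/3 c) are the assumptions stated in the comments, not measured values:

```python
# Rough RTT estimate for an antipodal connection, using the
# assumptions from the comment thread above (all values approximate).

C = 299_792_458                      # speed of light in vacuum, m/s

one_way_m = 20_000e3 * 1.3           # 20,000 km plus ~30% zig-zag
round_trip_m = 2 * one_way_m         # packets must come back, too
rtt_s = round_trip_m / (C * 2 / 3)   # signal travels at ~2/3 c in cable/fibre

print(f"estimated RTT: {rtt_s * 1000:.0f} ms")  # → estimated RTT: 260 ms
```

This lands right around the ~250 ms figure peterh arrives at, before counting any router or queuing delays.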

No, it will not slow down a connection, but you need to be aware of the maximum length of a copper run, which is 100 meters. This must include the length of the patch cable from the host to the data point, and from the patch frame to the switch.

However, when using Cat 6 with a 10 Gb interface, you can only run up to 55 meters and would need Cat 6a to achieve 100 meters for this type of transmission.

So if you go above the specified maximum cable length, you will start to see problems, not just with speed but also with loss, due to the nature of the electrical signal running through the cable.

The 100 meters applies only to a single run without any intermediary network device such as a switch. If you have a switch in between, you can obviously extend this, with the maximum applying to each cable run from device to device.

Using fibre connectivity, you can extend the range depending on the type of fibre and optics used, which is beyond the scope of your question.

Ah, I see, thanks a lot! – SidS yesterday
Note that, due to the nature of TCP, data loss (e.g., from overlength wire) can cause a perceived slow down because the connection has to wait for lost or bad packets to be retransmitted. – Chris Bouchard 18 hours ago
So just an FYI... signals in a wire do take time to travel a distance. A signal in a cat5e cable propagates at about 0.64 × the speed of light. So assuming a cable length of 100 m, the time it takes the signal to travel that distance is approximately 521 nanoseconds: time = distance / speed = 100 meters / (0.64 × 3e8 meters-per-second). – Trevor Boyd Smith 8 mins ago
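The comment's arithmetic is easy to reproduce. A minimal sketch, assuming the 0.64 velocity factor quoted above (a typical figure for cat5e twisted pair, not a measured one):

```python
# Propagation time for a 100 m cat5e run, per the comment above.

C = 299_792_458          # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.64   # assumed typical for cat5e twisted pair

t = 100 / (VELOCITY_FACTOR * C)   # seconds to traverse 100 m
print(f"{t * 1e9:.0f} ns")        # → 521 ns
```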

For all practical purposes, there will be no effect on the speed of your connection.

There will be a very insignificant amount of delay due to long cables. This won't affect the maximum speed of your connection, but it does add some latency. pjc50 points out that it's about a nanosecond for every foot of cable length, which is a good rule of thumb used by many engineers when developing systems that are very dependent on latencies at those timescales.

In reality, you will never notice a difference. A "fast" ping time on the Internet is 10 ms, which is 10,000,000 ns. Adding even a few hundred feet of cable isn't going to have a noticeable effect at that point. In fact, nearly every step of the way involves delays more extreme than those from signal propagation. For example, most consumer-grade routers wait for the last byte of an incoming packet to be received, and check it for errors, before sending the first byte of the packet on its way. This delay will be on the order of 5,000 ns! Given that the maximum length of a cable run (per the Ethernet spec) is 100 m, about 330 ft, the cable itself could never cause much more than about 330 ns of delay!
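The store-and-forward point above can be made concrete with a quick comparison: cable propagation over a full 100 m run versus the time just to clock one full-size 1500-byte frame onto the wire. The 0.66 velocity factor and the link speeds are illustrative assumptions:

```python
# Comparing cable propagation delay with store-and-forward
# serialization of one 1500-byte frame (illustrative values).

C = 299_792_458
VELOCITY_FACTOR = 0.66          # assumed typical for twisted pair

prop_ns = 100 / (VELOCITY_FACTOR * C) * 1e9   # ~505 ns for 100 m
frame_bits = 1500 * 8                         # one full-size frame

for rate_bps, name in [(100e6, "100 Mbit/s"), (1e9, "1 Gbit/s")]:
    serialize_ns = frame_bits / rate_bps * 1e9
    print(f"{name}: serialization {serialize_ns:,.0f} ns "
          f"vs propagation {prop_ns:.0f} ns")
```

Even at gigabit speed, serializing a single frame (~12,000 ns) dwarfs the propagation delay of the longest legal cable.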

It's not the propagation delay which is a problem, but packet loss with very long cables. The speed will theoretically be the same, but the "perceived" speed can become much lower as packets are lost and have to be resent. – vsz 5 hours ago

Sort of, to a very tiny extent.

The longer your cable, the higher the latency you experience – gamers call this "ping" time. However, the effect is about one nanosecond per foot of cable, which is unlikely to be noticeable in most cases, especially as a single Ethernet cable is limited to 100 m.

This matters for high-frequency trading and occasionally for email.

It doesn't, of itself, affect the throughput or "bandwidth" of your cable.
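The nanosecond-per-foot rule of thumb above can be sanity-checked in a couple of lines. Note the rule matches light in vacuum; with a typical cable velocity factor (0.66 assumed here, not from the answer) the real figure is somewhat higher:

```python
# Checking the ~1 ns/ft rule of thumb mentioned above.

C = 299_792_458
FOOT_M = 0.3048                 # meters per foot

print(f"vacuum: {FOOT_M / C * 1e9:.2f} ns/ft")          # → vacuum: 1.02 ns/ft
print(f"cable:  {FOOT_M / (0.66 * C) * 1e9:.2f} ns/ft") # → cable:  1.54 ns/ft
```

So "one nanosecond per foot" is really a lower bound; in copper it is closer to 1.5 ns/ft, which changes nothing about the conclusion.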

+1 for the speed of light story. That made my day. With some protocols (like SMB), latency will affect throughput, as I learnt the other day... – Aron 20 hours ago

I believe it can, but not in the way most people are thinking about.

Most are thinking of the extra propagation delay through the cable itself. This is valid, but as people have already pointed out, so small that it's essentially always irrelevant.

There is another possibility, though. Ethernet cables come in a few different varieties: cat 5, cat 5e, and cat 6 are currently in (reasonably) wide use. Cat 5 doesn't officially support gigabit Ethernet, but with a short (e.g., 1 or 2 meter) cat 5 cable that's in good physical condition, you can often get a seemingly reliable gigabit connection anyway [1].

With a longer cable, however, you could get enough signal deterioration that a gigabit connection is no longer possible. In that case, the link would normally fall back to a 100 megabit connection instead, so you wouldn't just gain some irrelevant amount of latency; you'd lose a substantial amount of bandwidth.

This wouldn't have any effect on an Internet connection unless you happen to be one of the fortunate few with more than 100 Mbit/s of bandwidth. Access to local resources could be affected much more drastically, though.


  [1] All of these use identical-looking RJ-45 connectors; the difference between cat 5 and cat 5e cable usually isn't obvious except by reading the printing on the cable jacket.
Yes. All the other answers are theoretical, but I've seen THIS happen in real life. Even if a gigabit/100 Mbit link is detected, the connection will slow down due to retransmissions caused by errors from signal deterioration. – slebetman 7 hours ago

The standard is 100 m (~328 ft) before attenuation makes the signal unusable, but the direct answer to your question is yes, a long cable can slow your connection. Attenuation is caused by the internal resistance of the copper, which humans perceive as lag/slowdown of network connectivity. If the cable is under 100 m, the slowdown is relatively unnoticeable, but it can cause issues if you're getting close to that 100 m mark. And keep in mind that the 100 m length is measured from the point the cable plugs into the port on your computer to the point it plugs into a device that regenerates the signal, like a switch or a router. (I've personally had to change out a cable to a printer because the ~97 m length caused sporadic communication.)

The standard has nothing to do with signal attenuation. The original reason was CSMA/CD, which is completely irrelevant in modern Ethernet installations. Today we almost exclusively use switches on Fast Ethernet installations, and GbE doesn't even HAVE CSMA/CD. – Aron 6 hours ago

The electrical signal propagation time over a 100 m maximum-length Ethernet cable is only about half a microsecond. This is far less than the amount of time your router etc. need to do their jobs.

This only begins to be relevant over much larger distances, e.g. from your computer to the server for a game you're playing; but that number is entirely in the hands of your ISP and its partners, and the physical locations of you and the server itself.


The electrical signal will be slowed down by only a minimal amount (after all, it travels at almost the speed of light). How much time does light take to travel 100 meters?

timeTaken = 100 / 299792458 ≈ 0.00000033 seconds

So it just takes an extra 0.00033 milliseconds, which is about 330 CPU cycles (on a 1 GHz CPU). However, the longer the cable, the weaker the signal becomes; once the signal is weak enough, it starts to lose bits of information because of interference. Each time a bit is lost, something in the network layer sees that a checksum/parity check fails and asks for that packet again.

Asking for a new packet will take a very long time.

So as long as the signal is strong in the cable, the slowdown will be minimal (it is greater than I expected anyway).

Once you start losing information because the cable is too long, the slowdown will greatly increase.

Also note that certain communication protocols are timed, so if the cable is too long it may not even be usable because it would go out of sync (that's a by-design issue).

Good! You calculated! :-) Thus, it will be packet loss. – peterh 16 hours ago
Note that the velocity factor of a CAT5 or similar cable is not 1. Simply dividing by the speed of light does not apply for most electrical media. – user2943160 11 hours ago
It does not apply, but provides an approximate lower bound; so instead of 0.00033 milliseconds, the time is increased by "something more". It is not an exact computation, of course, but gives an estimate. – DarioOO 4 hours ago
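The velocity-factor correction discussed in these comments is a one-line adjustment. A sketch, assuming 0.64 as a typical twisted-pair velocity factor (the exact value varies by cable):

```python
# Vacuum lower bound vs. velocity-factor-adjusted time for 100 m.

C = 299_792_458

t_vacuum_ns = 100 / C * 1e9            # dividing by c alone: lower bound
t_cable_ns = 100 / (0.64 * C) * 1e9    # with an assumed 0.64 velocity factor
print(f"{t_vacuum_ns:.0f} ns vs {t_cable_ns:.0f} ns")  # → 334 ns vs 521 ns
```

Either way, both numbers stay far below anything a human could perceive.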
