TCP Performance in Wireless multi-hop Networks

== Introduction ==
<table>
<tr><td>&nbsp;&nbsp;</td>
<td>
<li>Early research showed that TCP suffers poor performance in wireless networks because of packet losses and corruption caused by wireless-induced errors

<li>Further studies searched for mechanisms to improve TCP performance in cellular wireless systems

<li>Other research investigated additional network characteristics that negatively affect TCP performance, such as bandwidth asymmetry and large round-trip times, which are prevalent in satellite networks

<li>In this presentation we address another network characteristic that impacts TCP performance and is common in mobile ad hoc networks: link failures due to mobility

<li>We first present a performance analysis of standard TCP over mobile ad hoc networks

<li>We then analyse the use of explicit notification techniques to counter the effects of link failures
</td></tr>
</table>

== Simulation Environment and Methodology ==
<table>
<tr><td>&nbsp;&nbsp;</td>
<td>
<li>For the simulations, the ns network simulator from Lawrence Berkeley National Laboratory was used, with extensions from the MONARCH project at Carnegie Mellon

<li>The extensions include a set of mobile ad hoc network routing protocols, an implementation of BSD's ARP protocol, an 802.11 MAC layer, a radio propagation model, and mechanisms to model node mobility using pre-computed mobility patterns that are fed to the simulation at run time

<li>No modifications were made to the simulator (except minor bug fixes that were necessary)
<li>All results are based on a network configuration consisting of TCP-Reno over IP on an [[WLAN|802.11 wireless network]], with routing provided by the [[RoutingProtocols|Dynamic Source Routing]] (DSR) protocol and BSD's ARP protocol (used to resolve IP addresses to MAC addresses)

<li>The objective was to observe TCP's performance in the presence of mobility-induced failures in a plausible network environment, for which any of the proposed mobile wireless ad hoc routing protocols would have sufficed

<li>The network model consists of 30 nodes in a 1500 x 300 meter flat, rectangular area

<li>Nodes move according to the random waypoint mobility model

<li>In the random waypoint model, each node x picks a random destination in the area and a random speed, and travels to the destination in a straight line

<li>Once x arrives, it pauses, picks another destination and continues

<li>Here the pause time is zero, so every node is always moving

<li>All nodes communicate with identical half-duplex wireless radios, which are modelled after 802.11-based WaveLAN wireless radios, with a bandwidth of 2 Mbps and a nominal transmission radius of 250 m

<li>The TCP packet size was 1460 bytes, the maximum window was eight packets

<li>All simulation results are based on the average throughput over 50 scenarios or movement patterns

<li>Each pattern, generated randomly, designates the initial placement and the movement of each of the nodes over simulated time

<li>The same patterns were used for the different mean speeds

<li>For a given pattern at different speeds, the same sequence of movements (and link failures) occurs

<li>The speed of each node is uniformly distributed in the interval 0.9v - 1.1v for some mean speed v (a small sketch of such a pattern generator follows this list)


</td>
</tr>
</table>
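The movement model just described is simple enough to sketch in code. Below is a minimal, hypothetical pattern generator for the zero-pause random waypoint model with per-node speeds drawn uniformly from 0.9v - 1.1v; it is only an illustration, not the MONARCH scenario generator actually used for the simulations.
<pre>
import random

AREA_X, AREA_Y = 1500.0, 300.0   # flat rectangular simulation area (meters)

def random_waypoint(start, mean_speed, sim_time):
    """Generate (time, x, y, speed) waypoints for one node.

    Zero pause time: on arrival the node immediately picks a new
    destination, so it is always moving."""
    t, (x, y) = 0.0, start
    waypoints = [(t, x, y, 0.0)]
    while t < sim_time:
        dx, dy = random.uniform(0, AREA_X), random.uniform(0, AREA_Y)
        speed = random.uniform(0.9 * mean_speed, 1.1 * mean_speed)
        dist = ((dx - x) ** 2 + (dy - y) ** 2) ** 0.5
        t += dist / speed                     # straight-line travel
        x, y = dx, dy
        waypoints.append((t, x, y, speed))
    return waypoints

# One pattern: 30 nodes, mean speed 10 m/s, 120 s of simulated time
pattern = [random_waypoint((random.uniform(0, AREA_X),
                            random.uniform(0, AREA_Y)), 10.0, 120.0)
           for _ in range(30)]
</pre>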


== Performance Metric ==
<table>
<tr><td>&nbsp;&nbsp;</td>
<td>
<li>Throughput is used as the performance metric

<li>TCP throughput is usually less than optimal due to the TCP sender's inability to accurately determine the cause of a packet loss

<li>The TCP sender assumes that all packet losses are caused by congestion

<li>When a link on the TCP route breaks, the TCP sender reacts as if congestion were the cause, reducing its congestion window and, in the case of a timeout, backing off its retransmission timeout (RTO) (see the sketch after this list)

<li>Therefore, route changes due to host mobility can have a detrimental impact on TCP performance

<li>To gauge the impact of route changes on TCP performance, we derived an upper bound on TCP throughput, the expected throughput

<li>The TCP throughput measured in the simulations is then compared with the expected throughput

<li>''The expected throughput was obtained as follows:''<br>
- First, a static (fixed) network of n nodes forming a linear chain with n-1 wireless hops was simulated<br>
- The nodes used the 802.11 protocol for medium access<br>
- Then a one-way TCP data transfer was performed between the two nodes at the ends of the linear chain, and the TCP throughput between these nodes was measured

</td>
</tr>
</table>
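For reference, the reaction described above can be sketched as follows. This is a minimal, generic illustration of Reno-style behaviour (variable names are made up), not code from the simulator.
<pre>
def on_loss(state, timeout):
    """Reno-style reaction to a loss, which is always treated as congestion.

    state: dict with 'cwnd' and 'ssthresh' (in packets) and 'rto' (seconds)
    timeout: True if the loss was detected by an RTO expiry."""
    state['ssthresh'] = max(state['cwnd'] // 2, 2)   # halve on any loss
    if timeout:
        state['cwnd'] = 1                            # restart in slow start
        state['rto'] = min(state['rto'] * 2, 64.0)   # exponential RTO back-off
    else:
        state['cwnd'] = state['ssthresh']            # fast recovery
    return state
</pre>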
<table width=300 align=center border=1>
<tr><td><b><i>Hops</i></b></td><td width=120><b><i>Throughput (Kbps)</i></b></td><td rowspan=11>
Table 1 shows the measured TCP throughput as a function of the number of hops, averaged over ten runs<br>

Throughput decreases rapidly when the number of hops is increased from 1, then stabilizes once the number of hops becomes large
</td></tr>
<tr><td>1</td><td>1463.0</td></tr>
<tr><td>2</td><td>729.0</td></tr>
<tr><td>3</td><td>484.4</td></tr>
<tr><td>4</td><td>339.9</td></tr>
<tr><td>5</td><td>246.4</td></tr>
<tr><td>6</td><td>205.2</td></tr>
<tr><td>7</td><td>198.1</td></tr>
<tr><td>8</td><td>191.8</td></tr>
<tr><td>9</td><td>185.3</td></tr>
<tr><td>10</td><td>182.4</td></tr>
</table>

<table>
<tr><td>&nbsp;&nbsp;</td>
<td>
<li>Our objective here is to use these measurements to determine the expected throughput

<li>The expected throughput is a function of the mobility pattern

<li>For instance, if two nodes are always adjacent and move together, the expected throughput for the TCP connection between them would be identical to that for one hop in Table 1
</td></tr>
</table>

<div align=center>
<b>expected throughput</b> = <math>\frac{\sum_{i=1}^{\infty}T_i \cdot t_i}{\sum_{i=1}^{\infty}t_i}</math>
</div>
where <math>t_i</math> is the time during which the shortest path between sender and receiver contains <math>i</math> hops, and <math>T_i</math> is the measured throughput for an <math>i</math>-hop chain from Table 1.
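As a worked example, once a mobility pattern's t_i values are known, the expected throughput follows directly from Table 1. A minimal sketch (the per-hop-count times below are made up purely for illustration):
<pre>
# Measured chain throughput from Table 1 (Kbps), indexed by hop count
T = {1: 1463.0, 2: 729.0, 3: 484.4, 4: 339.9, 5: 246.4,
     6: 205.2, 7: 198.1, 8: 191.8, 9: 185.3, 10: 182.4}

def expected_throughput(hop_times):
    """hop_times maps hop count i -> time t_i (seconds) during which the
    shortest sender-receiver path has i hops."""
    total = sum(hop_times.values())
    return sum(T[i] * t for i, t in hop_times.items()) / total

# Hypothetical pattern: 40 s at 2 hops, 60 s at 3 hops, 20 s at 5 hops
print(expected_throughput({2: 40.0, 3: 60.0, 5: 20.0}))   # about 526 Kbps
</pre>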

== Measurement of TCP-Reno Throughput ==

<table>
<tr><td>&nbsp;&nbsp;</td>
<td>

<li>One would expect TCP throughput to degrade monotonically as the speed increases
<li>Throughput drops sharply as the mean speed is increased from 2 m/s to 10 m/s
<li>When the mean speed is increased from 10 m/s to 20 m/s and 30 m/s, the throughput averaged over 50 runs decreases only slightly
<li>A counter-intuitive result: in fact, the throughput could potentially even have increased with speed

<li>The following observation can be made: although, for any given speed, the points may be located near or far from the diagonal line (the expected throughput), when the speed is increased the points tend to move away from the diagonal, signifying a degradation in throughput

<li>For a given speed, certain mobility patterns achieve a throughput close to 0, while other mobility patterns (with the same mean speed) are able to achieve a higher throughput

<li>Even at high speeds, some mobility patterns result in high throughput that is close to the expected throughput

<li>If two nodes move together, the link between them will not break, regardless of their speed

</td></tr></table>

== Mobility Induced Behaviours ==
<table><tr><td>&nbsp;&nbsp;</td>
<td>
<li>We now look at examples of mobility-induced behaviours

<li>There are several possible explanations (due to the variety of protocols used: 802.11 MAC, ARP, DSR and TCP on top of them)
<li>Throughput is a function of the data acknowledged to the sender
<li>The following scenario results in almost zero throughput (because routing failures cause some TCP packets to be dropped)



</td></tr>
</table>
<table border=1 align=center>
<tr><td><i><b>Event</b></i></td><td><i><b>Time (secs)</b></i></td>
<td><i><b>Node</b></i></td><td><i><b>SeqNo</b></i></td>
<td><i><b>Pkt</b></i></td><td><i><b>Reason for dropping</b></i></td></tr>
<tr><td>s</td><td>0.000</td><td>1</td><td>1</td><td>tcp</td><td></td></tr>
<tr><td>D</td><td>0.191</td><td>5</td><td>1</td><td>tcp</td><td>NRTE</td></tr>
<tr><td>s</td><td>6.000</td><td>1</td><td>1</td><td>tcp</td><td></td></tr>
<tr><td>r</td><td>6.045</td><td>2</td><td>1</td><td>tcp</td><td></td></tr>
<tr><td>s</td><td>6.145</td><td>2</td><td>1</td><td>ack</td><td></td></tr>
<tr><td>D</td><td>6.216</td><td>21</td><td>1</td><td>ack</td><td>NRTE</td></tr>
<tr><td>s</td><td>18.000</td><td>1</td><td>1</td><td>tcp</td><td></td></tr>
<tr><td>s</td><td>42.000</td><td>1</td><td>1</td><td>tcp</td><td></td></tr>
<tr><td>s</td><td>90.000</td><td>1</td><td>1</td><td>tcp</td><td></td></tr>
<tr><td>D</td><td>120.000</td><td>15</td><td>1</td><td>tcp</td><td>END</td></tr>
<tr><td>D</td><td>120.000</td><td>16</td><td>1</td><td>tcp</td><td>END</td></tr>
<tr><td>D</td><td>120.000</td><td>25</td><td>1</td><td>tcp</td><td>END</td></tr>
</table>
s – send, r – receive, D – dropped, NRTE – no route found, END – dropped at the end of the simulation
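The sender's transmission times in the trace (0, 6, 18, 42 and 90 seconds) are consistent with binary exponential back-off of the RTO: starting from an RTO of roughly 6 s and doubling it after every failed retransmission. A small sketch of that arithmetic:
<pre>
def retransmission_times(initial_rto, attempts):
    """Send times for the original transmission plus the retries, assuming
    the RTO doubles after every timeout (binary exponential back-off)."""
    times, t, rto = [0.0], 0.0, initial_rto
    for _ in range(attempts):
        t += rto
        times.append(t)
        rto *= 2
    return times

print(retransmission_times(6.0, 4))   # [0.0, 6.0, 18.0, 42.0, 90.0]
</pre>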

<font size=+1><i>First conclusion:</i></font>
<table><tr><td>&nbsp;&nbsp;</td>
<td>
<li>It is clear that the characteristics of the routing protocol have a major impact on TCP performance

<li>Biggest problem: Caching and propagation of stale routes

<li>TCP sender's routing protocol is unable to quickly recognize and purge stale routes

<li>This gets even more complicated when intermediate nodes are allowed to respond to route requests with stale routes from their own caches (amplified by nodes overhearing propagated stale routes and spreading the wrong information further)
<li>Upon further inspection it became apparent that the routing protocol regularly fails when the minimum path increases in length, independently of the mean speed
<li>If the nodes move closer together, DSR can maintain the route; if they diverge, DSR does not search for another route until an error occurs
<li>Thus, the TCP sender repeatedly times out and backs off
<li>This problem applies to all reactive (on-demand) routing protocols

</td></tr>
</table>
<font size=+1><i>Solutions:</i></font>
<table><tr><td>&nbsp;&nbsp;</td>
<td>
<li>Using more effective cache maintenance strategies
<li>Including simple techniques like dynamically adjusting the route cache timeout (depending on the observed route failure rate); a small sketch follows this list
<li>The use of negative route information
<li>The use of signal strength information
<li>First improve routing protocols, then look at TCP
</td></tr>
</table>
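As an illustration of the first two points, an adaptive route cache timeout could look roughly like the sketch below. This is purely illustrative and not the mechanism of any particular DSR implementation: the timeout shrinks when cached routes keep failing and grows back when they hold up.
<pre>
class AdaptiveRouteCacheTimeout:
    """Adjust the route cache timeout from the observed route failure rate
    (illustrative sketch only)."""

    def __init__(self, initial=30.0, minimum=1.0, maximum=300.0):
        self.timeout = initial           # seconds a cached route stays valid
        self.min, self.max = minimum, maximum

    def on_route_failure(self):
        # Stale routes are being served: age cache entries out more aggressively.
        self.timeout = max(self.min, self.timeout * 0.5)

    def on_route_success(self):
        # Cached routes are holding up: let entries live a little longer.
        self.timeout = min(self.max, self.timeout * 1.1)
</pre>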

== Explicit Feedback ==
Explicit Feedback is a technique for signaling congestion, corruption due to wireless transmission errors and link failures due to mobility.<br>

Here we take a brief look at <i>Explicit Link Failure Notification</i> - <b>ELFN</b>.
<table>
<tr><td>&nbsp;&nbsp;</td>
<td>
<li>Objective: Provide the TCP sender with information about link and route failures so that it can avoid responding to the failures as if congestion occurred

<li>There are different ways of implementing the ELFN message:<br>
- A very simple one: a "host unreachable" ICMP message as a notice to the sender<br>
- Another way: a notice piggy-backed on the route failure message that the routing protocol already sends

<li>The approach taken here:<br>
The DSR route failure message carries parts of the TCP/IP headers of the packet that instigated the notice, including the sender and receiver addresses (to identify the connection), the ports and the TCP sequence number<br><br>
<b>Functionality</b> (sketched in code below):<br>
1. The TCP sender receives an ELFN<br>
2. It disables its retransmission timers and enters a stand-by mode<br>
3. While on stand-by, a packet is sent periodically to probe the network for a new route<br>
4. On receiving an ACK, it leaves stand-by, restores its timers and continues as normal (packet probing was used here instead of sending an explicit "route established" message)
</td></tr>
</table>
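A minimal sketch of this sender-side behaviour is shown below. The names, the probe interval and the underlying <code>tcp</code> object are assumptions for illustration, not the exact implementation from [1].
<pre>
class ElfnTcpSender:
    """Sender-side ELFN handling (sketch). 'tcp' is an assumed object that
    provides disable_timers(), restore_timers() and send_packet(seqno)."""

    PROBE_INTERVAL = 2.0   # seconds between probe packets (assumed value)

    def __init__(self, tcp):
        self.tcp = tcp
        self.standby = False
        self.probe_seqno = None

    def on_elfn(self, seqno):
        # 1./2. ELFN received: freeze retransmission timers, enter stand-by.
        self.standby = True
        self.probe_seqno = seqno
        self.tcp.disable_timers()

    def on_probe_timer(self):
        # 3. While on stand-by, periodically probe the network for a new route.
        if self.standby:
            self.tcp.send_packet(self.probe_seqno)

    def on_ack(self, seqno):
        # 4. An ACK implies a route exists again: leave stand-by, resume as normal.
        if self.standby:
            self.standby = False
            self.tcp.restore_timers()
</pre>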
<b>Result:</b> The use of ELFN improved the throughput for each of the speeds (the results lie in closer proximity to the expected throughput line, and the tighter clustering of the different movement patterns also indicates an improvement).

== Split-TCP ==
<table><tr><td>&nbsp;&nbsp;</td>
<td>
<li>A scheme developed by Kopparty, Krishnamurthy, Faloutsos and Tripathi at the University of California, Riverside

<li>Split-TCP (also "TCP with proxies") separates the functionalities of TCP congestion control and reliable packet delivery

<li>For any TCP connection, certain nodes along the route take on the role of proxies (they buffer packets upon receipt)

<li>By introducing proxies, shorter TCP connections are emulated: better parallelism in the network is achieved, and the "unfair" advantage of short connections is minimized

<li>Long connections are much more likely to stall because they span more links, and since short connections can transmit faster, they can dominate shared links

<li>The 802.11 MAC protocol accentuates the problem:<br>"channel capture effect" (the first connection captures the channel until it has transmitted all its data)

<li>Examining the effect of splitting long TCP connections into shorter localized segments ("zones")

<li>Using proxies as interfacing agents between these zones

<li>A proxy intercepts packets, buffers them, acknowledges their receipt to the source (or the previous proxy) by sending a local acknowledgement (LACK) and takes over the responsibility of delivering the packets further

<li>Upon the receipt of a LACK (from the next proxy or final destination) a proxy will purge the packet from its buffer

<li>The end-to-end acknowledgement of TCP is thereby not changed (the overhead of ACKs and LACKs is small enough to be considered acceptable)

<li><b>Important point:</b> Congestion seems to be a local phenomenon, whereas reliability is an end-to-end requirement, which motivates splitting the transport layer functionality along these lines

<li>The source sends at a rate proportional to the rate of arrival of LACKs from the next proxy, and the proxies themselves do the same
<li>However, the source only purges a packet from its buffer upon receipt of an end-to-end ACK
<li>Correspondingly, the transmission window is split into two windows: the congestion window and the end-to-end window, where the congestion window is always a sub-window of the end-to-end window
<li>At each proxy there is a congestion window which governs the rate of sending between proxies (a sketch of the proxy behaviour follows below)<br><br>
</td></tr>
</table>
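The per-proxy behaviour described above can be sketched as follows. This is a simplified illustration (the callback names are made up), not the implementation from [2]: a proxy buffers each intercepted packet, LACKs it towards the source or previous proxy, forwards it towards the destination, and purges it only when the downstream LACK arrives.
<pre>
from collections import OrderedDict

class SplitTcpProxy:
    """One Split-TCP proxy (sketch). send_lack and forward stand in for the
    real network primitives between zones."""

    def __init__(self, send_lack, forward):
        self.buffer = OrderedDict()   # seqno -> packet awaiting downstream LACK
        self.send_lack = send_lack    # callback: LACK to source / previous proxy
        self.forward = forward        # callback: send on to next proxy / destination

    def on_packet(self, seqno, packet):
        # Intercept, buffer, acknowledge locally, and take over delivery.
        self.buffer[seqno] = packet
        self.send_lack(seqno)
        self.forward(seqno, packet)

    def on_lack(self, seqno):
        # The next proxy (or the destination) has the packet: purge it here.
        self.buffer.pop(seqno, None)
</pre>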
<font size=+1><b>Overall Result:</b></font>
<table><tr><td>&nbsp;&nbsp;</td>
<td>
<li>Split-TCP is able to deal better with the problems of mobility – if one “zone” fails because of a broken link, it is still possible to sustain data transfer on the other local segments (whereas a normal TCP session would be choked)
<li>Thus, Split-TCP takes advantage of the links that are up!
<li>Furthermore, Split-TCP improves the fairness between TCP sessions in a network, because all sessions are now of short length
<li>The channel capture effect is less detrimental, because only small regions are captured, not a whole connection
<li>The total throughput improves with the use of proxies by about 5 to 30%
<li>The unfairness decreases from 0.8 to 0.2 (1.0 being maximum unfairness)

</td></tr>

</table>

== Conclusion ==
TCP throughput drops significantly when node movement causes link failures, because of TCP's inability to distinguish between link failure and congestion.

The use of ELFN can improve the TCP performance significantly.

There is a bad interaction between TCP and ARP (in the BSD implementation: a one-packet queue and no request time-out mechanism), because ARP regularly drops packets or holds them indefinitely while awaiting resolution, so a more advanced ARP needs to be employed.

As mentioned, the route cache is another big problem; therefore more aggressive cache management strategies are needed.

Split-TCP can be seen as a good step forward for improving TCP over wireless multi-hop connections.

== References ==
<i>
[1] G. Holland and N. Vaidya, “Analysis of TCP Performance Over Mobile Ad-Hoc Networks”, IEEE/ACM MOBICOM, 1999

[2] S. Kopparty, S. Krishnamurthy, M. Faloutsos and S. Tripathi, “Split-TCP for Mobile Ad-Hoc Networks”, 2002

[3] H. Balakrishnan, V. N. Padmanabhan, S. Seshan and R. H. Katz, “A Comparison of Mechanisms for Improving TCP Performance over Wireless Links”, IEEE/ACM Transactions on Networking, 1997

[4] M. Gerla, K. Tang and R. Bagrodia, “TCP Performance in Wireless Multi-hop Networks”, 1998

[5] http://www.google.de
</i>
