
Thread: Network Performance of Linux

  1. #1

    Network Performance of Linux

    I have a box equipped with two two-port NICs: one with two 1Gb ports, the other with two 10Gb ports. Each of the ports can be subdivided into four sub-ports, and I think the sub-ports are independent PCI devices/functions.
    I have two quad-core CPUs and 8GB of memory.
    No matter how hard I try, I just cannot get this box to push out 130M packets/second steadily. The packets are short UDP packets, 64 to 256 bytes each.
    It seems there is some locking and/or spinning that prevents the ports from flowing at their maximum capacity.
    I just can't figure out where the bottleneck is.
    I am using the latest kernel version.

    Does anyone have any suggestions?

  2. #2
    redhead
    Join Date
    Jun 2001
    Copenhagen, Denmark
    If you're sending data stored on a SATA disk, I suspect that to be your bottleneck. Even though it promises 300MB/s in theory, anything more than 150MB/s is imaginary. Have you tried sending from /dev/random or /dev/zero?

    If you want to see if it's the disk, then try testing it like:
    # hdparm -tT /dev/sdx

    # hdparm -tT /dev/sda
     Timing cached reads:   466 MB in  2.01 seconds = 232.36 MB/sec
     Timing buffered disk reads:  218 MB in  3.00 seconds =  72.62 MB/sec
    Here you see the cached read from the disk is 232MB/s, but when actually having to fetch data from the disk, it drops to 72MB/s.
    Don't worry Ma'am. We're university students, - We know what We're doing.
    'Ruiat coelum, fiat voluntas tua.'
    Datalogi - en livsstil; Intet liv, ingen stil.

  3. #3
    No disk is used at all. One thread is created for every socket, and each thread has a 4096-byte buffer.
    The same buffer is used for every transmission over the socket; only the message length is varied randomly in the range of 64 to 256 bytes. There should not be any bottleneck if every socket is bound to a dedicated NIC port, but clearly there is one somewhere. I used the kernel's "pktgen" (packet generator): it can pump out 770,000 packets/second over one port, but not much more when it drives two NIC ports.

  4. #4
    Join Date
    Dec 2004
    Have you tried playing with the wmem/rmem settings under /proc/sys/net/core? There's another section specific to UDP memory settings under /proc/sys/net/ipv4/udp*
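    For reference, a sketch of that kind of tuning (run as root; the values here are illustrative starting points, not tested recommendations for this workload):

    ```shell
    # Raise the socket buffer ceilings and defaults under net.core.
    # Larger send buffers reduce drops when a sender bursts.
    sysctl -w net.core.wmem_max=8388608        # max send buffer, bytes
    sysctl -w net.core.rmem_max=8388608        # max receive buffer, bytes
    sysctl -w net.core.wmem_default=262144
    sysctl -w net.core.rmem_default=262144

    # UDP-wide memory limits, in pages: min / pressure / max
    sysctl -w net.ipv4.udp_mem="65536 131072 262144"
    ```

    Since these are configuration writes, check the effect by reading the values back afterwards, e.g. `sysctl net.core.wmem_max`.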


  5. #5

    I think some locking in the kernel prevents the NIC ports

    I think some locking in the kernel prevents the ports from outputting packets at their maximum capacity. I have subscribed to the Linux kernel mailing list and hope to get answers there. Theoretically, two independent ports should add up almost linearly in the total packets they can output, given enough CPU and memory resources, but that is not what my experiments show.

  6. #6
    Join Date
    Dec 2004
    What or where is the locking happening?

    Without knowing exactly what settings you have, I would guess that tying each Ethernet interface's IRQ to a specific processor would improve things. Also, if you're using a recent kernel, you can play with the different schedulers and see which one gives you better performance.
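    Something like this, assuming made-up IRQ numbers and interface names (check /proc/interrupts on your box first; run as root):

    ```shell
    # Find which IRQ line each port actually uses.
    grep eth /proc/interrupts

    # smp_affinity takes a hex CPU bitmask: pin one port per CPU.
    echo 1 > /proc/irq/24/smp_affinity   # eth2 -> CPU0 (mask 0x1)
    echo 2 > /proc/irq/25/smp_affinity   # eth3 -> CPU1 (mask 0x2)
    ```

    Note that irqbalance, if running, may rewrite these masks behind your back.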

  7. #7

    The "pktgen", packet generating kernel module

    I used the packet-generating kernel module. It uses kernel threads bound to CPUs to output the packets. Given two independent NIC ports and two such kernel threads, I expected the total number of packets per second to add up linearly. I just can't figure out why this is not the case. I plan to do more experiments in the hope of finding out.
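    For anyone following along, a rough sketch of a two-port pktgen setup (device names, destination address, and counts are examples; see Documentation/networking/pktgen.txt in the kernel tree for the full interface):

    ```shell
    modprobe pktgen

    # One kernel thread per port: thread 0 drives eth2, thread 1 drives eth3.
    echo "add_device eth2" > /proc/net/pktgen/kpktgend_0
    echo "add_device eth3" > /proc/net/pktgen/kpktgend_1

    # Per-device parameters: short packets, fixed destination, no inter-packet delay.
    for dev in eth2 eth3; do
        echo "pkt_size 64"    > /proc/net/pktgen/$dev
        echo "count 10000000" > /proc/net/pktgen/$dev
        echo "delay 0"        > /proc/net/pktgen/$dev
        echo "dst 10.0.0.2"   > /proc/net/pktgen/$dev
    done

    # Start all threads; per-port results appear in /proc/net/pktgen/eth2 and eth3.
    echo "start" > /proc/net/pktgen/pgctrl
    ```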

  8. #8
    Join Date
    Dec 2004
    How about the interrupts for the interfaces? Did you bind those to a specific processor too?

  9. #9
    What kind of PCI bus are these cards connected to?
    Do you have stats for the motherboard?
    63,000 bugs in the code, 63,000 bugs,
    ya get 1 whacked with a service pack,
    now there's 63,005 bugs in the code!!

  10. #10

    The two blades were down. I've just finished reinstalling them.

    The hard disks of the two blades where I was testing network throughput had been pulled out. I borrowed two disks and have just finished the installation. I plan to stop the irqbalance daemon and find out which IRQ is allocated by which port and bound to which CPU. I will post the results after I have done that.
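    That plan amounts to something like this (the service path assumes a SysV-style init; adjust for your distro):

    ```shell
    # Stop the balancer so manual affinity settings stick.
    /etc/init.d/irqbalance stop     # or: killall irqbalance

    # The per-CPU columns show which CPU has been servicing each port's IRQ.
    grep eth /proc/interrupts
    ```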



