Linux/Ubuntu · 2010. 12. 18. 22:01
Frustrated that the Intel PRO/100+ won't do teaming in Intel PROset unless an Intel PRO server adapter is present as the master =_=

Just in case, I searched for Linux channel bonding and something promising turned up.
On Ubuntu 10.04:
$ ifenslave
The program 'ifenslave' is currently not installed.  You can install it by typing:
sudo apt-get install ifenslave-2.6
Run that and the package gets installed, and then

 $ sudo modprobe bonding
loads the module, which is recognized automatically.

Configuration steps to be added once I've tried them myself.
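In the meantime, for reference: on Ubuntu the bond can also be made persistent in /etc/network/interfaces. This is an untested sketch, assuming eth1/eth2 as slaves and the 192.168.0.254/24 address used in the transcript below; the bond-* option names come from the ifenslave package and have varied slightly between releases.

```
# /etc/network/interfaces — untested sketch; interface names and address are assumptions
auto bond0
iface bond0 inet static
    address 192.168.0.254
    netmask 255.255.255.0
    bond-slaves eth1 eth2
    bond-mode balance-rr
    bond-miimon 100
```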



$ sudo modprobe bonding

$ dmesg | tail -15
[  325.324377] Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)
[  325.324388] bonding: Warning: either miimon or arp_interval and arp_ip_target module parameters must be specified, otherwise bonding will not detect link failures! see bonding.txt for details.
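The warning above means the driver has no way to detect a dead link. A common fix is to pass miimon when the module loads; a sketch via modprobe options (the values here are illustrative, not taken from this setup):

```
# /etc/modprobe.d/bonding.conf — illustrative values
# miimon=100: poll link state every 100 ms; balance-rr matches the default mode shown below
options bonding mode=balance-rr miimon=100
```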

$ ll /proc/net/bonding/bond0
-r--r--r-- 1 root root 0 2010-12-19 00:29 /proc/net/bonding/bond0

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: down
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

$ ifconfig
eth0      Link encap:Ethernet  HWaddr 11:11:11:11:11:11 
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Interrupt:23 Base address:0x8000

eth1      Link encap:Ethernet  HWaddr 11:11:11:11:11:11 
          inet addr:192.168.0.198  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::207:e9ff:fe13:38fc/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:33 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3491 (3.4 KB)  TX bytes:7120 (7.1 KB)

eth2      Link encap:Ethernet  HWaddr 11:11:11:11:11:11 
          inet addr:192.168.0.155  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::207:e9ff:fe13:378d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1525 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1250 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1394038 (1.3 MB)  TX bytes:196987 (196.9 KB)

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:480 (480.0 B)  TX bytes:480 (480.0 B)

$ sudo ip addr add 192.168.0.254/24 brd + dev bond0

$ sudo ip link set dev bond0 up

$ ifconfig
bond0     Link encap:Ethernet  HWaddr 00:00:00:00:00:00 
          inet addr:192.168.0.254  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 11:11:11:11:11:11 
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Interrupt:23 Base address:0x8000

eth1      Link encap:Ethernet  HWaddr 11:11:11:11:11:11 
          inet addr:192.168.0.198  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::207:e9ff:fe13:38fc/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:33 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3491 (3.4 KB)  TX bytes:7120 (7.1 KB)

eth2      Link encap:Ethernet  HWaddr 11:11:11:11:11:11 
          inet addr:192.168.0.155  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::207:e9ff:fe13:378d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3038 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2100 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3396837 (3.3 MB)  TX bytes:352314 (352.3 KB)

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:480 (480.0 B)  TX bytes:480 (480.0 B)

$ sudo ifenslave bond0 eth1 eth2

Outbound traffic doesn't go through.. looks like a routing problem?
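One likely cause, since eth1 and eth2 above still hold their own 192.168.0.x addresses: the kernel ends up with three overlapping routes for the same subnet. A sketch of how one might check and clean that up (hypothetical, untested on this setup):

```shell
# Each interface with an address on 192.168.0.0/24 adds its own route;
# flushing the slaves' addresses leaves bond0 as the only path (untested sketch)
ip route show
sudo ip addr flush dev eth1
sudo ip addr flush dev eth2
ip route show
```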

Errors encountered — permissions (these appear when the commands are run without sudo)
$ ip link set dev bond0 up
RTNETLINK answers: Operation not permitted

$ ifenslave bond0 eth1 eth2
Slave 'eth1': Error: bring interface down failed
Master 'bond0', Slave 'eth1': Error: Enslave failed
Slave 'eth2': Error: bring interface down failed
Master 'bond0', Slave 'eth2': Error: Enslave failed

[Link : http://linux.die.net/man/8/ifenslave]
[Link : http://linux-ip.net/html/ether-bonding.html]
[Link : http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding]
    [Link : http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding#Potential_Sources_of_Trouble]
[Link : http://georgia.ubuntuforums.org/showthread.php?t=99668]

[Link : http://ubuntuforums.org/showthread.php?t=201713]
[Link : http://ubuntuforums.org/showthread.php?t=864657]
[Link : http://www.smallnetbuilder.com/lanwan/lanwan-howto/30575-how-to-set-up-server-nic-teaming]

[Link : http://linuxchannel.net/docs/ethernet-channel-bonding.txt]
[Link : https://help.ubuntu.com/community/UbuntuBonding]
[Link : https://help.ubuntu.com/community/LinkAggregation]
Posted by 구차니
Hardware/Network equipment · 2010. 11. 13. 12:19
I suddenly had a reason to increase network bandwidth,
and while searching for bonding the term "teaming" came up, so I looked into it.

The root concept is link aggregation,
with technologies such as Ethernet bonding and NIC teaming underneath it.
These techniques date back to when networks were slow: they were used to get past speed limits
and to gain reliability (with a single link, one broken cable takes the whole connection down).

Other terms for link aggregation include Ethernet bonding, NIC teaming, Trunking, port channel, link bundling, EtherChannel, Multi-link trunking (MLT), NIC bonding, network bonding,[1] Network Fault Tolerance (NFT), Smartgroup (from ZTE), and EtherTrunk (from Huawei).

Link aggregation addresses two problems with Ethernet connections: bandwidth limitations and lack of resilience.

  With regard to the first issue: bandwidth requirements do not scale linearly. Ethernet bandwidths historically have increased by an order of magnitude each generation: 10 Megabit/s, 100 Mbit/s, 1000 Mbit/s, 10,000 Mbit/s. If one started to bump into bandwidth ceilings, then the only option was to move to the next generation which could be cost prohibitive. An alternative solution, introduced by many of the network manufacturers in the early 1990s, is to combine two physical Ethernet links into one logical link via channel bonding. Most of these solutions required manual configuration and identical equipment on both sides of the aggregation.[2]

  The second problem involves the three single points of failure in a typical port-cable-port connection. In either the usual computer-to-switch or in a switch-to-switch configuration, the cable itself or either of the ports the cable is plugged into can fail. Multiple physical connections can be made, but many of the higher level protocols were not designed to failover completely seamlessly.

[링크 : http://en.wikipedia.org/wiki/Link_aggregation]

High-speed wireless these days seems to have gotten its sudden speed jumps from this same kind of channel combining:
On 802.11 (Wi-Fi) channel bonding is used in "Super G" technology, also referred as 108Mbit/s. It bonds two channels of classic 802.11g, which has 54Mbit/s signaling rate.

[링크 : http://en.wikipedia.org/wiki/Channel_bonding]
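The 108 Mbit/s figure in the quote above is just the two bonded 54 Mbit/s channels added together:

```shell
# Super G: two classic 802.11g channels, 54 Mbit/s signaling rate each
echo "$((2 * 54)) Mbit/s"
# → 108 Mbit/s
```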

Similar to RAID, but anyway, there's also a thing called RAIN!
[Link : http://en.wikipedia.org/wiki/Redundant_Array_of_Inexpensive_Nodes]


Which is how, while browsing network cards, I ran into this monster of a thing -_-
It costs a whopping 200,000 won or so!
And it says PCI / PCI-X — not PCI-Ex, so what is that!?

According to the ad(!) below it supports
32 / 64-bit
33/66/100/133 MHz
but plain PCI is 32-bit at 33 MHz, so in a PCI slot it won't reach full performance -_-
Shouldn't things like this hurry up and come out for a PCIe x1 slot?


Year created: 1998
Created by: IBM, HP, and Compaq
Superseded by: PCI Express (2004)
Width in bits: 64
Capacity: 1064 MB/s
Style: Parallel
Hotplugging interface: yes


[링크 : http://en.wikipedia.org/wiki/PCI-X]
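The 1064 MB/s figure in the table follows from bus width × clock, and the same arithmetic shows why a plain PCI slot bottlenecks a card like this:

```shell
# Peak bandwidth = (bus width in bytes) * (clock in MHz), in MB/s
echo "PCI-X 64-bit @ 133 MHz: $((64 / 8 * 133)) MB/s"   # → 1064 MB/s
echo "PCI   32-bit @  33 MHz: $((32 / 8 * 33)) MB/s"    # → 132 MB/s
```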
