Linux/Ubuntu | 2011. 12. 15. 22:38
Channel bonding via ifenslave is apparently done by giving an option like mode=0 at modprobe time..
The default is mode 0 (round robin), and you can start it in a different mode by passing a number, like this:
# modprobe bonding mode=6 
# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Link Failure Count: 1
Permanent HW addr: 00:00:00:00:00:00

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:00:00:00:00:00 
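
Putting the steps together, a minimal bring-up sketch with ifenslave (eth0/eth1 as in the output above; the IP address is a made-up placeholder):

# modprobe bonding mode=6 miimon=100                      # balance-alb, poll link state every 100 ms
# ifconfig bond0 192.168.0.10 netmask 255.255.255.0 up    # address the bond first (placeholder IP)
# ifenslave bond0 eth0 eth1                               # then enslave the physical NICs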

If you want to change the mode while it is running,
write the desired mode value into /sys/class/net/bond0/bonding/mode, as follows.
To configure bond0 for balance-alb mode:
# ifconfig bond0 down
# echo 6 > /sys/class/net/bond0/bonding/mode
 - or -
# echo balance-alb > /sys/class/net/bond0/bonding/mode
NOTE: The bond interface must be down before the mode can be changed.
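
For reference, the numbers written to that sysfs file map to these mode names (per bonding.txt):

0 = balance-rr (round robin, the default)
1 = active-backup
2 = balance-xor
3 = broadcast
4 = 802.3ad (LACP, requires switch support)
5 = balance-tlb
6 = balance-alb

Bringing the bond back up and re-reading the file should confirm the change:

# ifconfig bond0 up
# cat /sys/class/net/bond0/bonding/mode
balance-alb 6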

[Link : http://www.kernel.org/doc/Documentation/networking/bonding.txt]
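
To make the mode survive a reboot on Ubuntu, the ifenslave package also understands bond-* options in /etc/network/interfaces; a sketch, with the address and slave names as assumptions:

auto bond0
iface bond0 inet static
    address 192.168.0.10       # placeholder address
    netmask 255.255.255.0
    bond-slaves eth0 eth1      # NICs to enslave
    bond-mode balance-alb      # mode 6
    bond-miimon 100            # MII poll interval in ms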



Posted by 구차니
Hardware/Network equipment | 2010. 11. 13. 12:19
I suddenly had a reason to increase network bandwidth,
and while searching for bonding the term Teaming kept coming up, so I looked into it.

The umbrella concept is link aggregation,
and Ethernet bonding, NIC teaming, and so on exist as techniques under it.
These techniques apparently date back to when networks were slow: they were used to get past the speed limit
and to secure reliability (with a single link, one break kills the whole connection).

Other terms for link aggregation include Ethernet bonding, NIC teaming, Trunking, port channel, link bundling, EtherChannel, Multi-link trunking (MLT), NIC bonding, network bonding,[1] Network Fault Tolerance (NFT), Smartgroup (from ZTE), and EtherTrunk (from Huawei).

Link aggregation addresses two problems with Ethernet connections: bandwidth limitations and lack of resilience.

  With regard to the first issue: bandwidth requirements do not scale linearly. Ethernet bandwidths historically have increased by an order of magnitude each generation: 10 Megabit/s, 100 Mbit/s, 1000 Mbit/s, 10,000 Mbit/s. If one started to bump into bandwidth ceilings, then the only option was to move to the next generation which could be cost prohibitive. An alternative solution, introduced by many of the network manufacturers in the early 1990s, is to combine two physical Ethernet links into one logical link via channel bonding. Most of these solutions required manual configuration and identical equipment on both sides of the aggregation.[2]

  The second problem involves the three single points of failure in a typical port-cable-port connection. In either the usual computer-to-switch or in a switch-to-switch configuration, the cable itself or either of the ports the cable is plugged into can fail. Multiple physical connections can be made, but many of the higher level protocols were not designed to failover completely seamlessly.

[Link : http://en.wikipedia.org/wiki/Link_aggregation]

High-speed wireless these days seems to have gotten its sudden jumps in speed by using this same kind of channel combining:
On 802.11 (Wi-Fi) channel bonding is used in "Super G" technology, also referred to as 108Mbit/s. It bonds two channels of classic 802.11g, which has 54Mbit/s signaling rate.

[Link : http://en.wikipedia.org/wiki/Channel_bonding]

Similar to RAID, but at any rate something called RAIN also exists!
[Link : http://en.wikipedia.org/wiki/Redundant_Array_of_Inexpensive_Nodes]




Which is how, while browsing network cards, I found this monster of a thing -_-
It costs a whopping 200,000-plus won!
But it says PCI / PCI-X, not PCI-Ex, so what is that!?

According to the ad(!) below, it supports
32 / 64 bit
33/66/100/133 MHz
but plain PCI is 32 bit at 33 MHz, so in a plain PCI slot it probably can't deliver its full performance -_-
Shouldn't something like this hurry up and come out for a PCI-Ex 1x slot?
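
A rough check of the peak numbers (bus width x clock / 8 bits per byte):

64 bit x 133 MHz / 8 = 1064 MB/s  (PCI-X, matching the table below)
32 bit x  33 MHz / 8 =  133 MB/s  (plain PCI, barely enough for one gigabit port)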


Year created: 1998
Created by: IBM, HP, and Compaq
Superseded by: PCI Express (2004)
Width in bits: 64
Capacity: 1064 MB/s
Style: Parallel
Hotplugging interface: yes



[Link : http://en.wikipedia.org/wiki/PCI-X]

Posted by 구차니