Chapter 25. Configuring SLI and Multi-GPU FrameRendering
______________________________________________________________________________
The NVIDIA Linux driver contains support for NVIDIA SLI FrameRendering and
NVIDIA Multi-GPU FrameRendering. Both of these technologies allow an OpenGL
application to take advantage of multiple GPUs to improve visual performance.
The distinction between SLI and Multi-GPU is straightforward. SLI is used to
leverage the processing power of GPUs across two or more graphics cards, while
Multi-GPU is used to leverage the processing power of two GPUs colocated on
the same graphics card. If you want to link together separate graphics cards,
you should use the "SLI" X config option. Likewise, if you want to link
together GPUs on the same graphics card, you should use the "MultiGPU" X
config option. If you have two cards, each with two GPUs, and you wish to link
them all together, you should use the "SLI" option.
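As a sketch, these options live in the "Screen" section of the X configuration
file (the Identifier and Device names below are placeholders; see Appendix B
for the exact option values):

```
Section "Screen"
    Identifier "Screen0"
    Device     "Device0"
    # Link the GPUs on two or more separate graphics cards:
    Option     "SLI" "on"
    # Or link the two GPUs on a single graphics card:
    # Option   "MultiGPU" "on"
EndSection
```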
25A. RENDERING MODES
In Linux, with two GPUs, SLI and Multi-GPU can each operate in one of three
modes: Alternate Frame Rendering (AFR), Split Frame Rendering (SFR), and
Antialiasing (AA). When AFR mode is active, one GPU draws the next frame while
the other one works on the frame after that. In SFR mode, each frame is split
horizontally into two pieces, with one GPU rendering each piece. The split
line is adjusted to balance the load between the two GPUs. AA mode splits
antialiasing work between the two GPUs. Both GPUs work on the same scene and
the result is blended together to produce the final frame. This mode is useful
for applications that spend most of their time processing with the CPU and
cannot benefit from AFR.
With four GPUs, the same options are applicable. AFR mode cycles through all
four GPUs, each GPU rendering a frame in turn. SFR mode splits the frame
horizontally into four pieces. AA mode splits the work between the four GPUs,
allowing antialiasing up to 64x. With four GPUs, SLI can also operate in an
additional mode, Alternate Frame Rendering of Antialiasing (AFR of AA). With
AFR of AA, pairs of GPUs render alternate frames, each GPU in a pair doing
half of the antialiasing work. Note that these scenarios apply whether you
have four separate cards or you have two cards, each with two GPUs.
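A specific rendering mode can be requested through the option's value rather
than letting the driver choose; the following is a sketch, assuming the value
spellings defined in Appendix B:

```
# In the "Screen" section of the X configuration file:
Option "SLI" "AFR"        # Alternate Frame Rendering
# Option "SLI" "SFR"      # Split Frame Rendering
# Option "SLI" "AA"       # split antialiasing work between GPUs
# Option "SLI" "AFRofAA"  # four GPUs only: AFR of Antialiasing
```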
With some GPU configurations, there is additionally a special SLI Mosaic Mode
to extend a single X screen transparently across all of the available display
outputs on each GPU. See below for the exact set of configurations which can
be used with SLI Mosaic Mode.
25B. ENABLING MULTI-GPU
Multi-GPU is enabled by setting the "MultiGPU" option in the X configuration
file; see Appendix B for details about the "MultiGPU" option.
The nvidia-xconfig utility can be used to set the "MultiGPU" option, rather
than modifying the X configuration file by hand. For example:
% nvidia-xconfig --multigpu=on
25C. ENABLING SLI
SLI is enabled by setting the "SLI" option in the X configuration file; see
Appendix B for details about the SLI option.
The nvidia-xconfig utility can be used to set the SLI option, rather than
modifying the X configuration file by hand. For example:
% nvidia-xconfig --sli=on
25D. ENABLING SLI MOSAIC MODE
The simplest way to configure SLI Mosaic Mode using a grid of monitors is to
use 'nvidia-settings' (see Chapter 24). The steps to perform this
configuration are as follows:
1. Connect each of the monitors you would like to use to any connector from
any GPU used for SLI Mosaic Mode. If you are going to use fewer monitors
than there are connectors, connect one monitor to each GPU before adding
a second monitor to any GPUs.
2. Install the NVIDIA display driver set.
3. Configure an X screen to use the "nvidia" driver on at least one of the
GPUs (see Chapter 6 for more information).
4. Start X.
5. Run 'nvidia-settings'. You should see a tab in the left pane of
nvidia-settings labeled "SLI Mosaic Mode Settings". Note that you may
need to expand the entry for the X screen you configured earlier.
6. Check the "Use SLI Mosaic Mode" check box.
7. Select the monitor grid configuration you'd like to use from the "display
configuration" dropdown.
8. Choose the resolution and refresh rate at which you would like to drive
each individual monitor.
9. Set any overlap you would like between the displays.
10. Click the "Save to X Configuration File" button. NOTE: If you don't have
permissions to write to your system's X configuration file, you will be
prompted to choose a location to save the file. After doing so, you MUST
copy the X configuration file into a location the X server will consider
upon startup (usually '/etc/X11/xorg.conf' for X.Org servers or
'/etc/X11/XF86Config' for XFree86 servers).
11. Exit nvidia-settings and restart your X server.
Alternatively, nvidia-xconfig can be used to configure SLI Mosaic Mode via a
command like 'nvidia-xconfig --sli=Mosaic --metamodes=METAMODES' where the
METAMODES string specifies the desired grid configuration. For example, an
appropriate METAMODES string will configure four DFPs in a 2x2 configuration,
each running at 1920x1024,
with the two DFPs on GPU-0 driving the top two monitors of the 2x2
configuration, and the two DFPs on GPU-1 driving the bottom two monitors of
the 2x2 configuration.
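A command consistent with that description might look like the following
sketch; the GPU-N.DFP-N display device names follow the conventions of
Appendix C and are assumptions about the hardware at hand:

```
% nvidia-xconfig --sli=Mosaic --metamodes="GPU-0.DFP-0: 1920x1024 +0+0, GPU-0.DFP-1: 1920x1024 +1920+0, GPU-1.DFP-0: 1920x1024 +0+1024, GPU-1.DFP-1: 1920x1024 +1920+1024"
```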
See Chapter 13 for a detailed description of the MetaModes X configuration
option. See
Appendix C for further details on GPU and Display Device Names.
25E. HARDWARE REQUIREMENTS
SLI functionality requires:
o Identical PCI-Express graphics cards
o A supported motherboard (with the exception of Quadro Plex)
o In most cases, a video bridge connecting the two graphics cards
o To use SLI Mosaic Mode, the GPUs must either be part of a Quadro Plex
Visual Computing System (VCS) Model IV or newer, or each GPU must be
Quadro FX 5800, or Quadro Fermi or newer.
For the latest in supported SLI and Multi-GPU configurations, including SLI-
and Multi-GPU capable GPUs and SLI-capable motherboards, see
http://www.slizone.com.
25F. OTHER NOTES AND REQUIREMENTS
The following other requirements apply to SLI and Multi-GPU:
o Mobile GPUs are NOT supported
o SLI on Quadro-based graphics cards always requires a video bridge
o TwinView is also not supported with SLI or Multi-GPU. Only one display
can be used when SLI or Multi-GPU is enabled, with the exception of
Mosaic.
o If X is configured to use multiple screens and screen 0 has SLI or
Multi-GPU enabled, the other screens configured to use the nvidia driver
will be disabled. Note that if SLI or Multi-GPU is enabled, the GPUs used
by that configuration will be unavailable for single GPU rendering.
FREQUENTLY ASKED SLI AND MULTI-GPU QUESTIONS
Q. Why is glxgears slower when SLI or Multi-GPU is enabled?
A. When SLI or Multi-GPU is enabled, the NVIDIA driver must coordinate the
operations of all GPUs when each new frame is swapped (made visible). For
most applications, this GPU synchronization overhead is negligible.
However, because glxgears renders so many frames per second, the GPU
synchronization overhead consumes a significant portion of the total time,
and the framerate is reduced.
Q. Why is Doom 3 slower when SLI or Multi-GPU is enabled?
A. The NVIDIA Accelerated Linux Graphics Driver does not automatically detect
the optimal SLI or Multi-GPU settings for games such as Doom 3 and Quake 4.
To work around this issue, the environment variable __GL_DOOM3 can be set
to tell OpenGL that Doom 3's optimal settings should be used. In Bash, this
can be done in the same command that launches Doom 3 so the environment
variable does not remain set for other OpenGL applications started in the
same session:
% __GL_DOOM3=1 doom3
Doom 3's startup script can also be modified to set this environment
variable:
#!/bin/sh
# Needed to make symlinks/shortcuts work.
# the binaries must run with correct working directory
cd "/usr/local/games/doom3/"
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:.
export __GL_DOOM3=1
exec ./doom.x86 "$@"
This environment variable is temporary and will be removed in the future.
Q. Why does SLI or MultiGPU fail to initialize?
A. There are several reasons why SLI or MultiGPU may fail to initialize. Most
of these should be clear from the warning message in the X log file; e.g.:
o "Unsupported bus type"
o "The video link was not detected"
o "GPUs do not match"
o "Unsupported GPU video BIOS"
o "Insufficient PCI-E link width"
The warning message "Unsupported PCI topology" is likely due to problems
with your Linux kernel. The NVIDIA driver must have access to the PCI
Bridge (often called the Root Bridge) that each NVIDIA GPU is connected to
in order to configure SLI or MultiGPU correctly. There are many kernels
that do not properly recognize this bridge and, as a result, do not allow
the NVIDIA driver to access this bridge. See the below "How can I determine
if my kernel correctly detects my PCI Bridge?" FAQ for details.
Below are some specific troubleshooting steps to help deal with SLI and
MultiGPU initialization failures.
o Make sure that ACPI is enabled in your kernel. NVIDIA's experience
has been that ACPI is needed for the kernel to correctly recognize
the Root Bridge. Note that in some cases, the kernel's version of
ACPI may still have problems and require an update to a newer kernel.
o Run 'lspci' to check that multiple NVIDIA GPUs can be identified by
the operating system; e.g:
% /sbin/lspci | grep -i nvidia
If 'lspci' does not report all the GPUs that are in your system, then
this is a problem with your Linux kernel, and it is recommended that
you use a different kernel.
Please note: the 'lspci' utility may be installed in a location other
than '/sbin' on your system. If the above command fails with the
error "/sbin/lspci: No such file or directory", please try:
% lspci | grep -i nvidia
instead. You may also need to install your distribution's
"pciutils" package.
o Make sure you have the most recent SBIOS available for your
motherboard.
o The PCI-Express slots on the motherboard must provide a minimum link
width. Please make sure that the PCI Express slot(s) on your
motherboard meet the following requirements and that you have
connected the graphics board to the correct PCI Express slot(s):
o A dual-GPU board needs a minimum of 8 lanes (i.e. x8 or x16)
o A pair of single-GPU boards requires one of the following
supported link width combinations:
o x16 + x16
o x16 + x8
o x16 + x4
o x8 + x8
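Whether the slots actually negotiated these widths can be checked on a running
system. The following is a sketch: selecting devices by NVIDIA's PCI vendor ID
(10de) is an assumption about your setup, and root privileges may be needed
for lspci to read the link-status capability registers.

```shell
# List NVIDIA devices (PCI vendor ID 10de) and report the negotiated
# PCIe link width (the "Width xN" field of the LnkSta line) of each.
for dev in $(lspci -d 10de: | cut -d' ' -f1); do
    echo "Device $dev:"
    lspci -vv -s "$dev" | grep -o 'LnkSta:.*Width x[0-9]*'
done
```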
Q. How can I determine if my kernel correctly detects my PCI Bridge?
A. As discussed above, the NVIDIA driver must have access to the PCI Bridge
that each NVIDIA GPU is connected to in order to configure SLI or MultiGPU
correctly. Inspecting the PCI device tree (for example, with 'lspci -t -v')
will show whether the kernel correctly recognizes the PCI Bridge: on a
working system, the bus that a GPU sits on (say, bus 81) is connected to a
Root Bridge (Root Bridge 80), while on a system with a problematic kernel
there is no Root Bridge 80 and bus 81 is incorrectly connected at the base
of the device tree. In the bad case, the only solution is to upgrade your
kernel to one that properly detects your PCI bus layout.
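Concretely, the device tree can be printed in the same vein as the 'lspci'
checks above (the exact tree layout varies per system and pciutils version):

```
% /sbin/lspci -t -v
```

Each NVIDIA GPU's bus should appear nested beneath a Root Bridge entry; a GPU
bus hanging directly off the base of the tree indicates the kernel problem
described above.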
dos2unix wasn't installed, so I hurriedly searched for it -_-
But "cannot be authenticated"?... don't throw scary warnings at me!
$ sudo apt-get install tofrodos
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  tofrodos
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 20.4kB of archives.
After this operation, 86.0kB of additional disk space will be used.
WARNING: The following packages cannot be authenticated!
  tofrodos
Install these packages without verification [y/N]? Y
Get:1 http://kr.archive.ubuntu.com/ubuntu/ lucid/main tofrodos 1.7.8.debian.1-2 [20.4kB]
Fetched 20.4kB in 0s (31.5kB/s)
Selecting previously deselected package tofrodos.
(Reading database ... 150066 files and directories currently installed.)
Unpacking tofrodos (from .../tofrodos_1.7.8.debian.1-2_i386.deb) ...
Processing triggers for man-db ...
Setting up tofrodos (1.7.8.debian.1-2) ...
But... it isn't dos2unix;
instead, two programs, fromdos and todos, get installed -_-
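fromdos covers the same CRLF-to-LF conversion as dos2unix. Where neither tool
is available, a GNU sed one-liner (my substitution, not part of tofrodos) does
the same job:

```shell
# Create a file with DOS line endings, then strip the trailing
# carriage return from each line (CRLF -> LF), in place. The \r
# escape requires GNU sed.
printf 'hello\r\nworld\r\n' > sample.txt
sed -i 's/\r$//' sample.txt
```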
Out of laziness I pulled in the initrd from 2.6.32.24; it attempts to boot,
but since the kernel-related module files are missing, booting does not get
any further.
So I searched around, and it looks like running make modules_install, or
running depmod, should take care of it.
cd linux-2.6.30/drivers/gpu/drm/i915/
patch i915_drv.c /tmp/patch # make any modification you need here
# Build the module out-of-tree against the running kernel's headers:
make -C /usr/src/linux-headers-`uname -r` M=`pwd` modules
sudo make -C /usr/src/linux-headers-`uname -r` M=`pwd` modules_install
# Refresh module dependency data so the new module is found at boot:
sudo depmod -a
Options:
-d confdir Specify an alternative configuration directory.
-k Keep temporary directory used to make the image.
-o outfile Write to outfile.
-r root Override ROOT setting in mkinitrd.conf.
See mkinitramfs(8) for further details.
FILES
/etc/initramfs-tools/initramfs.conf
The default configuration file for the script. See initramfs.conf(5)
for a description of the available configuration parameters.
/etc/initramfs-tools/modules
Specified modules will be put in the generated image and loaded when
the system boots. The format - one per line - is identical to that of
/etc/modules, which is described in modules(5).
/etc/initramfs-tools/conf.d
The conf.d directory allows hardcoding boot arguments at initramfs build
time via config snippets. This allows setting ROOT or RESUME, which is
especially useful for bootloaders that do not pass a root bootarg.
/etc/initramfs-tools/DSDT.aml
If this file exists, it will be appended to the initramfs in a way
that causes it to be loaded by ACPI.
Well, regenerating it doesn't change much -_- it still can't get past
busybox (I give up!!)
More than anything else, I just can't figure out why the error
FATAL: Could not load /lib/modules/... : No such file or directory
occurs.
---
I thought swapping /dev/sda1 for /dev/hda1 might do it, but of course it
didn't -_-
(sda1 is the identifier for a SATA disk, hda1 for an IDE disk)
Just in case, I opened up the initrd's contents and found that the UUID
value in the conf/conf.d/resume file did not match what is in
/boot/grub/grub.cfg.
That value came from /etc/initramfs-tools/conf.d/resume,
and it appears that mkinitramfs copied it in from there.
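Assuming the mismatch is the culprit, a fix along these lines would make the
regenerated image consistent; the device name is an assumption, the UUID below
is a placeholder that must come from blkid, and the commands need root:

```
# Find the real UUID of the swap partition (device name is an assumption):
blkid /dev/sda5
# Write it into the resume snippet that mkinitramfs consumes:
echo "RESUME=UUID=<uuid-from-blkid>" > /etc/initramfs-tools/conf.d/resume
# Rebuild the initramfs so the corrected value is included:
update-initramfs -u
```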
Extracting the initrd image
An initrd image is just a gzip-compressed cpio archive, so to extract it: