ARC-1882 Series
(PCIe 2.0 to 6Gb/s SAS RAID Controllers)
6Gb/s SAS RAID Cards
USER’S Manual
Version: 1.0
Issue Date: December, 2011
Copyright and Trademarks
The information of the products in this manual is subject to change
without prior notice and does not represent a commitment on the part
of the vendor, who assumes no liability or responsibility for any errors
that may appear in this manual. All brands and trademarks are the
properties of their respective owners. This manual contains materials
protected under International Copyright Conventions. All rights
reserved. No part of this manual may be reproduced in any form or by
any means, electronic or mechanical, including photocopying, without
the written permission of the manufacturer and the author.
FCC Statement
This equipment has been tested and found to comply with the lim-
its for a Class B digital device, pursuant to part 15 of the FCC Rules.
These limits are designed to provide reasonable protection against in-
terference in a residential installation. This equipment generates, uses,
and can radiate radio frequency energy and, if not installed and used
in accordance with the instructions, may cause harmful interference to
radio communications. However, there is no guarantee that interfer-
ence will not occur in a particular installation.
Manufacturer’s Declaration for CE Certification
We confirm that the ARC-1882 series has been tested and found to comply with
the requirements set out in the council directive on the approximation of the
laws of the member states relating to the EMC Directive 2004/108/EC. For the
evaluation of electromagnetic compatibility, the following standards were applied:
EN 55022: 2006, Class B
EN 61000-3-2: 2006
EN 61000-3-3: 1995+A1: 2001+A2: 2005
EN 55024:1998+A1:2001+A2:2003
IEC61000-4-2: 2001
IEC61000-4-3: 2006
IEC61000-4-4: 2004
IEC61000-4-5: 2005
IEC61000-4-6: 2006
IEC61000-4-8: 2001
IEC61000-4-11: 2004
Contents
1. Introduction .............................................................. 10
1.1 Overview ....................................................................... 10
1.2 Features ........................................................................ 12
2. Hardware Installation ............................................... 16
2.1 Before Your First Installation .......................................... 16
2.2 Board Layout .................................................................. 16
2.3 Installation ..................................................................... 22
2.4 SAS Cables .................................................................... 29
2.4.1 Internal Min SAS 4i to SATA Cable ............................... 29
2.4.2 Internal Min SAS 4i to 4xSFF-8482 Cable ....................... 30
2.4.3 Internal Min SAS 4i (SFF-8087) to Internal Min SAS 4i (SFF-
8087) cable ....................................................................... 31
2.4.4 External Min SAS 4x Drive Boxes and Drive Expanders .... 32
2.5 LED Cables ..................................................................... 32
2.5.1 Recognizing a Drive Failure .......................................... 36
2.5.2 Replacing a Failed Drive .............................................. 37
2.6 Summary of the installation .............................................. 37
3. McBIOS RAID Manager .............................................. 40
3.1 Starting the McBIOS RAID Manager ................................... 40
3.2 McBIOS RAID manager .................................................... 41
3.3 Conguring Raid Sets and Volume Sets .............................. 42
3.4 Designating Drives as Hot Spares ...................................... 42
3.5 Using Quick Volume/Raid Setup Configuration .................... 43
3.6 Using Raid Set/Volume Set Function Method ....................... 44
3.7 Main Menu .................................................................... 46
3.7.1 Quick Volume/Raid Setup ............................................ 47
3.7.2 Raid Set Function ....................................................... 51
3.7.2.1 Create Raid Set .................................................... 52
3.7.2.2 Delete Raid Set ..................................................... 53
3.7.2.3 Expand Raid Set .................................................... 54
Migrating ...................................................................... 55
3.7.2.4 Ofine Raid Set ..................................................... 55
3.7.2.5 Activate Incomplete Raid Set ................................... 56
3.7.2.6 Create Hot Spare ................................................... 57
3.7.2.7 Delete Hot Spare ................................................... 57
3.7.2.8 Rescue Raid Set ................................................... 58
3.7.2.9 Raid Set Information .............................................. 59
3.7.3 Volume Set Function ................................................... 59
3.7.3.1 Create Volume Set (0/1/10/3/5/6) ........................... 60
• Volume Name ................................................................ 62
• Capacity ....................................................................... 63
• Stripe Size .................................................................... 65
• SCSI ID ........................................................................ 66
• Cache Mode .................................................................. 67
• Tag Queuing .................................................................. 67
3.7.3.2 Create Raid30/50/60 (Volume Set 30/50/60) ............ 68
3.7.3.3 Delete Volume Set ................................................. 69
3.7.3.4 Modify Volume Set ................................................. 69
3.7.3.5 Check Volume Set .................................................. 71
3.7.3.6 Stop Volume Set Check .......................................... 72
3.7.3.7 Display Volume Set Info. ........................................ 72
3.7.4 Physical Drives ........................................................... 73
3.7.4.1 View Drive Information .......................................... 73
3.7.4.2 Create Pass-Through Disk ....................................... 74
3.7.4.3 Modify Pass-Through Disk ....................................... 74
3.7.4.4 Delete Pass-Through Disk ....................................... 74
3.7.4.5 Identify Selected Drive ........................................... 75
3.7.4.6 Identify Enclosure .................................................. 75
3.7.5 Raid System Function ................................................. 76
3.7.5.1 Mute The Alert Beeper ............................................ 77
3.7.5.2 Alert Beeper Setting ............................................... 77
3.7.5.3 Change Password .................................................. 78
3.7.5.4 JBOD/RAID Function .............................................. 78
3.7.5.5 Background Task Priority ........................................ 79
3.7.5.6 SATA NCQ Support ................................................. 80
3.7.5.7 HDD Read Ahead Cache .......................................... 80
3.7.5.8 Volume Data Read Ahead ........................................ 81
3.7.5.9 Hdd Queue Depth Setting ....................................... 81
3.7.5.10 Empty HDD Slot LED ............................................ 82
3.7.5.11 Controller Fan Detection ....................................... 83
3.7.5.12 Auto Activate Raid Set .......................................... 83
3.7.5.13 Disk Write Cache Mode ......................................... 84
3.7.5.14 Capacity Truncation .............................................. 84
3.7.6 HDD Power Management ............................................. 85
3.7.6.1 Stagger Power On .................................................. 86
3.7.6.2 Time to Hdd Low Power Idle ................................... 87
3.7.6.3 Time To Low RPM Mode ......................................... 87
3.6.7.4 Time To Spin Down Idle Hdd .................................. 88
3.7.7 Ethernet Configuration ............................................... 89
3.7.7.1 DHCP Function ...................................................... 89
3.7.7.2 Local IP address .................................................... 90
3.7.7.3 HTTP Port Number ................................................. 91
3.7.7.4 Telnet Port Number ................................................ 91
3.7.7.5 SMTP Port Number ................................................. 92
3.7.8 View System Events ................................................... 93
3.7.9 Clear Events Buffer ..................................................... 93
3.7.10 Hardware Monitor ..................................................... 94
3.7.11 System Information .................................................. 94
4. Driver Installation ..................................................... 95
4.1 Creating the Driver Diskettes ............................................ 95
4.2 Driver Installation for Windows ......................................... 97
4.2.1 New Storage Device Drivers in Windows 7/2008/Vista/2003 ........ 97
4.2.2 Install Windows 7/2008/Vista/XP/2003 on a 6Gb/s SAS
RAID Volume ..................................................................... 97
4.2.2.1 Installation Procedures ........................................... 97
4.2.2.2 Making Volume Sets Available to Windows System ..... 99
4.2.3 Installing controller into an existing Windows 7/2008/Vista/
XP/2003 Installation ........................................................... 99
4.2.3.1 Making Volume Sets Available to Windows System ... 101
4.2.4 Uninstall controller from Windows 7/2008/Vista/2003/XP ......... 101
4.3 Driver Installation for Linux ............................................ 102
4.4 Driver Installation for FreeBSD ........................................ 103
4.5 Driver Installation for Solaris .......................................... 103
4.6 Driver Installation for Mac X ........................................... 103
4.6.1 Installation Procedures .............................................. 104
4.6.2 Making Volume Sets Available to Mac OS X .................. 105
5. ArcHttp Proxy Server Installation ............................ 106
5.1 For Windows................................................................. 107
5.2 For Linux ..................................................................... 108
5.3 For FreeBSD ................................................................. 110
5.4 For Solaris 10 X86 ......................................................... 110
5.5 For Mac OS 10.X ........................................................... 111
5.6 ArcHttp Configuration .................................................... 111
6. Web Browser-based Configuration ......................... 116
6.1 Start-up McRAID Storage Manager ................................. 116
• Start-up McRAID Storage Manager from Windows Local
Administration ................................................................ 117
• Start-up McRAID Storage Manager from Linux/FreeBSD/So-
laris/Mac Local Administration .......................................... 118
• Start-up McRAID Storage Manager Through Ethernet Port
(Out-of-Band) ............................................................... 118
6.2 6Gb/s SAS RAID controller McRAID Storage Manager ......... 119
6.3 Main Menu .................................................................. 120
6.4 Quick Function .............................................................. 120
6.5 Raid Set Functions ........................................................ 121
6.5.1 Create Raid Set ....................................................... 121
6.5.2 Delete Raid Set ........................................................ 122
6.5.3 Expand Raid Set ....................................................... 122
6.5.4 Ofine Raid Set ........................................................ 123
6.5.5 Rename Raid Set ...................................................... 124
6.5.6 Activate Incomplete Raid Set ..................................... 124
6.5.7 Create Hot Spare ..................................................... 125
6.5.8 Delete Hot Spare ...................................................... 126
6.5.9 Rescue Raid Set ....................................................... 126
6.6 Volume Set Functions .................................................... 127
6.6.1 Create Volume Set (0/1/10/3/5/6) ............................. 127
6.6.2 Create Raid30/50/60 (Volume Set 30/50/60) ............... 130
6.6.3 Delete Volume Set .................................................... 131
6.6.4 Modify Volume Set .................................................... 132
6.6.4.1 Volume Growth ................................................... 132
6.6.4.2 Volume Set Migration ........................................... 133
6.6.5 Check Volume Set .................................................... 134
6.6.6 Schedule Volume Check ............................................ 134
6.7 Physical Drive .............................................................. 135
6.7.1 Create Pass-Through Disk .......................................... 135
6.7.2 Modify Pass-Through Disk .......................................... 136
6.7.3 Delete Pass-Through Disk .......................................... 137
6.7.4 Identify Enclosure .................................................... 137
6.7.5 Identify Drive .......................................................... 137
6.8 System Controls ........................................................... 138
6.8.1 System Cong ......................................................... 138
• System Beeper Setting ................................................. 139
• Background Task Priority ............................................... 139
• JBOD/RAID Conguration .............................................. 139
• SATA NCQ Support ....................................................... 139
• HDD Read Ahead Cache ................................................ 139
• Volume Data Read Ahead ............................................. 139
• HDD Queue Depth ....................................................... 140
• Empty HDD Slot LED .................................................... 140
• CPU Fan Detection ........................................................ 140
• SES2 Support .............................................................. 140
• Max Command Length .................................................. 141
• Auto Activate Incomplete Raid ....................................... 141
• Disk Write Cache Mode ................................................. 141
• Disk Capacity Truncation Mode ....................................... 141
6.8.2 Advanced Configuration ............................................. 142
6.8.3 HDD Power Management ........................................... 145
6.8.3.1 Stagger Power On Control ..................................... 145
6.8.3.2 Time to Hdd Low Power Idle ................................. 146
6.8.3.3 Time To Hdd Low RPM Mode ................................. 146
6.8.3.4 Time To Spin Down Idle HDD ................................. 146
6.8.3.5 SATA Power Up In Standby ................................... 146
6.8.4 Ethernet Configuration ............................................. 147
6.8.5 Alert By Mail Configuration ....................................... 148
6.8.6 SNMP Configuration .................................................. 149
6.8.7 NTP Configuration .................................................... 149
6.8.8 View Events/Mute Beeper .......................................... 150
6.8.9 Generate Test Event ................................................. 151
6.8.10 Clear Events Buffer ................................................. 151
6.8.11 Modify Password ..................................................... 152
6.8.12 Update Firmware ................................................... 153
6.9 Information .................................................................. 153
6.9.1 Raid Set Hierarchy .................................................... 153
6.9.2 SAS Chip Information ............................................... 154
6.9.4 Hardware Monitor ..................................................... 155
Appendix A ................................................................. 156
Upgrading Flash ROM Update Process .................................... 156
Appendix B .................................................................. 160
Battery Backup Module (ARC-6120BA-T121) ........................... 160
Appendix C .................................................................. 164
SNMP Operation & Installation .............................................. 164
Appendix D .................................................................. 176
Event Notication Congurations ........................................ 176
A. Device Event .............................................................. 176
B. Volume Event ............................................................. 177
C. RAID Set Event .......................................................... 178
D. Hardware Monitor Event .............................................. 178
Appendix E .................................................................. 180
RAID Concept .................................................................... 180
RAID Set ......................................................................... 180
Volume Set ...................................................................... 180
Ease of Use Features ......................................................... 181
• Foreground Availability/Background Initialization .............. 181
• Online Array Roaming ................................................... 181
• Online Capacity Expansion ............................................. 181
• Online Volume Expansion .............................................. 184
High availability .................................................................. 184
• Global/Local Hot Spares .................................................. 184
• Hot-Swap Disk Drive Support ........................................... 185
• Auto Declare Hot-Spare ................................................. 185
• Auto Rebuilding ............................................................ 186
• Adjustable Rebuild Priority ............................................... 186
High Reliability ................................................................... 187
• Hard Drive Failure Prediction ............................................ 187
• Auto Reassign Sector ...................................................... 187
• Consistency Check ......................................................... 188
Data Protection .................................................................. 188
• Battery Backup ............................................................. 188
• Recovery ROM ............................................................... 189
Appendix F .................................................................. 190
Understanding RAID .......................................................... 190
RAID 0 ............................................................................ 190
RAID 1 ............................................................................ 191
RAID 10(1E) .................................................................... 192
RAID 3 ............................................................................ 192
RAID 5 ............................................................................ 193
RAID 6 ............................................................................ 194
RAID x0 ............................................................................ 194
JBOD .............................................................................. 195
Single Disk (Pass-Through Disk) ......................................... 195
INTRODUCTION
10
1. Introduction
This section presents a brief overview of the 6Gb/s SAS RAID control-
ler, ARC-1882 series. (PCIe 2.0 to 6Gb/s SAS RAID controllers)
1.1 Overview
SAS 2.0 is designed for much higher speed data transfer than previously
available and is backward compatible with SAS 1.0. The 6Gb/s SAS interface
supports both 6Gb/s and 3Gb/s SAS/SATA disk drives for data-intensive
applications and 6Gb/s or 3Gb/s SATA drives for low-cost bulk storage of
reference data. The ARC-1882 family includes 8-port low-profile models as
well as 12/16/24 internal-port models with 4 additional external ports. The
ARC-1882LP/1882i/1882x support eight 6Gb/s SAS ports via one internal and
one external/two internal/two external mini SAS connectors, respectively.
The ARC-1882ix-12/16/24 attaches directly to SATA/SAS midplanes with 3/4/6
SFF-8087 internal connectors, or increases capacity using one additional
SFF-8088 external connector. When used with 6Gb/s SAS expanders, the
controller can provide up to 128 devices through one or more 6Gb/s SAS
JBODs, making it an ideal solution for enterprise-class storage applications
that call for maximum configuration flexibility.
ARC-1882LP/1882i/1882x 6Gb/s RAID controllers are low-profile PCIe cards,
ideal for 1U and 2U rack-mount systems. These controllers utilize the same
RAID kernel that has been field-proven in existing external RAID controller
products, allowing Areca to quickly bring stable and reliable PCIe 2.0 6Gb/s
SAS RAID controllers to the market.
Unparalleled Performance
The 6Gb/s SAS RAID controllers raise the standard to higher performance
levels with several enhancements, including a new high-performance dual-core
ROC processor, a DDR3-1333 memory architecture and a high-performance
PCIe 2.0 x8-lane host interface bus interconnection. The low-profile
controllers support 1GB of on-board ECC DDR3-1333 SDRAM memory by default.
The ARC-1882ix-12/16/24 controllers each include one 240-pin DIMM socket
with a default 1GB of ECC DDR3-1333 single rank registered SDRAM
(1Rx8 or 1Rx16), upgradable to 4GB. The optional battery backup
module provides power to the cache if it contains data not yet written
to the drives when power is lost. Test results show improved overall
performance compared to other 6Gb/s SAS RAID controllers. The powerful
new ROC processor integrates eight 6Gb/s SAS ports on chip and delivers
high performance for servers and workstations.
Unsurpassed Data Availability
As storage capacities continue to increase rapidly, users need greater
levels of disk drive fault tolerance, which can be implemented without
doubling the investment in disk drives. RAID 6 can offer fault tolerance
greater than RAID 1 or RAID 5 but consumes only the capacity of two disk
drives for distributed parity data. The 6Gb/s SAS RAID controllers, with
their extreme-performance RAID 6 engine, provide a full-featured RAID 6
implementation to meet this requirement. The controller can concurrently
compute two parity blocks and still achieve performance very similar to
RAID 5.
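To make the capacity trade-off concrete, the short Python sketch below (an
illustration only, not part of any Areca software; the function name and the
example drive sizes are ours) computes the usable capacity of an array of
equally sized drives for RAID 5 (one drive's worth of parity) versus RAID 6
(two drives' worth).

```python
# Illustrative only: usable capacity for RAID 5 vs. RAID 6 with equal-size drives.

def usable_capacity_tb(drive_count, drive_size_tb, raid_level):
    """Usable capacity after subtracting the distributed-parity overhead."""
    parity_drives = {5: 1, 6: 2}[raid_level]
    if drive_count <= parity_drives:
        raise ValueError("not enough drives for this RAID level")
    return (drive_count - parity_drives) * drive_size_tb

# Eight 2TB drives:
print(usable_capacity_tb(8, 2.0, 5))   # 14.0 TB usable, survives 1 drive failure
print(usable_capacity_tb(8, 2.0, 6))   # 12.0 TB usable, survives 2 drive failures
```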
The 6Gb/s SAS RAID controllers can also provide RAID levels 0, 1, 1E, 3, 5,
6, 10, 30, 50, 60, Single Disk or JBOD for maximum configuration flexibility.
Their high data availability and protection derive from the following
capabilities: Online RAID Capacity Expansion, Array Roaming, Online RAID
Level / Stripe Size Migration, Global Online Spare, Automatic Drive Failure
Detection, Automatic Failed Drive Rebuilding, Disk Hot-Swap, Online
Background Rebuilding, Instant Availability/Background Initialization, Auto
Reassign Sector, Redundant Flash Image and Battery Backup Module. Greater
than 2TB support allows for very large volume set applications in 64-bit
environments, such as data mining and managing large databases.
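The greater-than-2TB support mentioned above matters because classic 32-bit
LBA addressing with 512-byte sectors tops out at 2 TiB; the quick arithmetic
below (plain Python, standard 512-byte sector size assumed) shows where that
barrier comes from and why 64-bit LBA removes it.

```python
# Why volumes larger than 2TB need 64-bit LBA (assuming 512-byte sectors).
SECTOR_BYTES = 512

max_32bit_lba = (2 ** 32) * SECTOR_BYTES   # 2,199,023,255,552 bytes
max_64bit_lba = (2 ** 64) * SECTOR_BYTES   # astronomically larger

print(max_32bit_lba / 2 ** 40)   # 2.0 TiB -- the classic "2TB barrier"
print(max_64bit_lba / 2 ** 40)   # about 1.7e10 TiB with 64-bit LBA
```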
Maximum Interoperability
The 6Gb/s SAS RAID controllers support a broad range of operating systems,
including Windows 7/2008/Vista/XP/2003, Linux (Open Source), FreeBSD
(Open Source), Solaris (Open Source), Mac, VMware and more, along with key
system monitoring features such as enclosure management (SES-2, SMP, &
SGPIO) and SNMP function. Our products
and technology are based on an extensive testing and validation process, and
they leverage Areca's field-proven SAS and SATA RAID controller compatibility
with operating systems, motherboards, applications and device drivers.
Easy RAID Management
The controllers contain an embedded McBIOS RAID manager that can be accessed
via a hot key at the motherboard BIOS boot-up screen. This pre-boot McBIOS
RAID manager can be used to simplify the setup and management of the RAID
controller. The controller firmware also contains a browser-based McRAID
storage manager, which can be accessed through the Ethernet port or the
ArcHttp proxy server in Windows, Linux, FreeBSD and other environments. The
McRAID storage manager allows local and remote users to create and modify
RAID sets and volume sets, and to monitor RAID status, from a standard web
browser. The Single Admin Portal (SAP) monitor utility allows one application
to scan multiple RAID units on the network.
1.2 Features
Controller Architecture
• Dual Core RAID-on-Chip (ROC) 800 MHz processor
• PCIe 2.0 x8 lane host interface
• 1GB on-board DDR3-1333 SDRAM with ECC (ARC-1882LP/
1882i/1882x)
• One 240-pin DIMM socket with default 1GB of ECC DDR3-1333
single rank registered SDRAM (1Rx8 or 1Rx16), upgradable to
4GB (ARC-1882ix-12/16/24)
• Write-through or write-back cache support
• Support up to 4/8/12/16/24 internal and/or 4/8 external 6Gb/s
SAS ports
• Multi-adapter support for large storage requirements
• BIOS boot support for greater fault tolerance
• BIOS PnP (plug and play) and BBS (BIOS boot specification)
support
• Support EFI BIOS for Mac Pro
• NVRAM for RAID event & transaction log
• Redundant ash image for controller availability
• Battery Backup Module (BBM) ready (Option)
• RoHS compliant
RAID Features
• RAID level 0, 1, 10(1E), 3, 5, 6, 30, 50, 60, Single Disk or JBOD
• Multiple RAID selection
• Online array roaming
• Ofine RAID set
• Online RAID level/stripe size migration
• Online capacity expansion and RAID level migration simultane-
ously
• Online volume set growth
• Instant availability and background initialization
• Support global and dedicated hot spare
• Automatic drive insertion/removal detection and rebuilding
• Greater than 2TB capacity per disk drive support
• Greater than 2TB per volume set (64-bit LBA support)
• Support intelligent power management to save energy and
extend service life
• Support for the NTP protocol to synchronize the RAID controller clock over
the on-board Ethernet port
Monitors/Notication
• System status indication through global HDD activity/fault con-
nector, individual activity/fault connector, LCD/I2C connector and
alarm buzzer
• SMTP support for email notification
• SNMP support for remote manager
• Enclosure management (SES-2, SMP and SGPIO) ready
RAID Management
• Field-upgradeable firmware in flash ROM
In-Band Manager
• Hot key "boot-up" McBIOS RAID manager via M/B BIOS
• Web browser-based McRAID storage manager via ArcHttp proxy
server for all operating systems
• Support Command Line Interface (CLI)
• API library for customers to write their own monitor utilities
• Single Admin Portal (SAP) monitor utility
Out-of-Band Manager
• Firmware-embedded web browser-based McRAID storage man-
ager, SMTP manager, SNMP agent and Telnet function via
Ethernet port
• API library for customers to write their own monitor utilities
• Support push button and LCD display panel (option)
Operating System
• Windows 7/2008/Vista/XP/2003
• Linux
• FreeBSD
• VMware
• Solaris 10/11 x86/x86_64
• Mac OS 10.4.x/10.5.x/10.6.x/10.7.x
(For the latest supported OS listing, visit http://www.areca.com.tw)
6Gb/s SAS RAID controllers
Model name: ARC-1882ix-12 / ARC-1882ix-16 / ARC-1882ix-24
I/O Processor: Dual Core RAID-on-Chip 800MHz
Form Factor (H x L): Full Height: 98.4 x 250 mm
Host Bus Type: PCIe 2.0 x8 lanes
Drive Connector: 3xSFF-8087 + 1xSFF-8088 (ix-12) / 4xSFF-8087 + 1xSFF-8088 (ix-16) / 6xSFF-8087 + 1xSFF-8088 (ix-24)
Drive Support: Up to 128 6Gb/s and 3Gb/s SAS/SATA HDDs
RAID Level: 0, 1, 1E, 3, 5, 6, 10, 30, 50, 60, Single Disk, JBOD
On-Board Cache: One 240-pin DIMM socket with default 1GB of ECC DDR3-1333 single rank registered SDRAM (1Rx8 or 1Rx16), upgradable to 4GB
Management Port: In-Band: PCIe; Out-of-Band: BIOS, LCD, LAN Port
Enclosure Ready: Individual Activity/Faulty Header, SGPIO, SMP, SES-2 (for external port)
6Gb/s SAS RAID controllers
Model name: ARC-1882i / ARC-1882LP / ARC-1882x
I/O Processor: Dual Core RAID-on-Chip 800MHz
Form Factor (H x L): Low Profile: 64.4 x 169.5 mm
Host Bus Type: PCIe 2.0 x8 lanes
Drive Connector: 2xSFF-8087 (1882i) / 1xSFF-8087 + 1xSFF-8088 (1882LP) / 2xSFF-8088 (1882x)
Drive Support: Up to 128 6Gb/s and 3Gb/s SAS/SATA HDDs
RAID Level: 0, 1, 1E, 3, 5, 6, 10, 30, 50, 60, Single Disk, JBOD
On-Board Cache: 1GB on-board DDR3-1333 SDRAM
Management Port: In-Band: PCIe; Out-of-Band: BIOS, LCD, LAN Port
Enclosure Ready: Individual Activity/Faulty Header, SGPIO, SMP, SES-2
Note:
A low-profile bracket is included in the low-profile board shipping package.
2. Hardware Installation
This section describes the procedures for installing the 6Gb/s SAS RAID
controllers.
2.1 Before Your First Installation
Thank you for purchasing the 6Gb/s SAS RAID controller as your RAID data
storage subsystem. This user manual gives simple step-by-step instructions
for installing and configuring the 6Gb/s SAS RAID controller. To ensure
personal safety and to protect your equipment and data, read the following
information and package list carefully before you begin installing.
Package Contents
If your package is missing any of the items listed below, contact your
local dealer before you install. (Disk drives and disk mounting brackets
are not included.)
• 1 x 6Gb/s SAS RAID controller in an ESD-protective bag
• 1 x Installation CD – containing drivers, related software, an electronic
version of this manual and other related manuals
• 1 x User manual
• 1 x Low-profile bracket
2.2 Board Layout
The controller family includes 8-port models as well as industry-first
12/16/24 internal-port models with 4 additional external ports. This section
provides the board layout and connector/jumper locations for the 6Gb/s SAS
RAID controller.
Connector Type Description
1. (J5) Battery Backup Module Connector 12-pin box header
2. (J6) RS232 Port for CLI to configure the
expander functions on the RAID controller (*1)
RJ11 connector
3. (CN1) SAS 25-28 Ports (External) SFF-8088
4. (J9) Ethernet Port RJ45
5. (J7) Individual Fault LED Header 24-pin header
6. (J8) Individual Activity (HDD) LED Header 24-pin header
7. (J1) Global Fault/Activity LED 4-pin header
8. (J2) I2C/LCD Connector 8-pin header
9. (SCN1) SAS 21-24 Ports (Internal) SFF-8087
10. (SCN2) SAS 17-20 Ports (Internal) SFF-8087
11. (SCN3) SAS 13-16 Ports (Internal) SFF-8087
12. (SCN4) SAS 9-12 Ports (Internal) SFF-8087
13. (SCN5) SAS 5-8 Ports (Internal) SFF-8087
14. (SCN6) SAS 1-4 Ports (Internal) SFF-8087
Table 2-1, ARC-1882ix-12/16/24 connectors
Figure 2-1, ARC-1882ix-12/16/24 6Gb/s SAS RAID controller
Note:
*1: You can download the ARC1880ix_1882ix Expander-CLI.PDF manual from
http://www.areca.com.tw/support/main.htm to view and set expander
configuration.
Connector Type Description
1. (J7) Ethernet Port RJ45
2. (J6) Individual Fault LED Header 4-pin header
3. (J5) Individual Activity (HDD) LED Header 4-pin header
4. (J4) Global Fault/Activity LED 4-pin header
5. (J2) Battery Backup Module Connector 12-pin box header
6. (J1) Manufacture Purpose Port 12-pin header
7. (J3) I2C/LCD Connector 8-pin header
8. (SCN1) SAS 1-4 Ports (Internal) SFF-8087
9. (SCN2) SAS 5-8 Ports (Internal) SFF-8087
Table 2-2, ARC-1882i connectors
Figure 2-2, ARC-1882i 6Gb/s SAS RAID controller
Figure 2-3, ARC-1882LP 6Gb/s SAS RAID controller
Connector Type Description
1. (J2) Battery Backup Module Connector 12-pin box header
2. (J1) Manufacture Purpose Port 12-pin header
3. (J3) Individual Fault/Activity LED Header 8-pin header
4. (J4) Global Fault/Activity LED 4-pin header
5. (J5) I2C/LCD Connector 8-pin header
6. (SCN1) SAS 1-4 Ports (External) SFF-8088
7. (SCN2) SAS 5-8 Ports (Internal) SFF-8087
8. (J6) Ethernet port RJ45
Table 2-3, ARC-1882LP connectors
The following describes the ARC-1882 series link/activity LEDs.
LED Status
Link LED (Green light): When the link LED illuminates, it indicates that the
link is connected.
Activity LED (Blue light): When the activity LED illuminates, it indicates
that the adapter is active.
Figure 2-4, ARC-1882x 6Gb/s SAS RAID controller
Connector Type Description
1. (J1) Battery Backup Module Connector 12-pin box header
2. (J2) Manufacture Purpose Port 12-pin header
3. (J3) I2C/LCD Connector 8-pin header
4. (J6) Ethernet Port RJ45
5. (SCN1) SAS 5-8 Ports (External) SFF-8088
6. (SCN2) SAS 1-4 Ports (External) SFF-8088
Table 2-4, ARC-1882x connectors
Tools Required
An ESD grounding strap or mat is required. Also required are stan-
dard hand tools to open your system’s case.
System Requirement
The 6Gb/s SAS RAID controller can be installed in a universal PCIe slot.
The ARC-1882 series 6Gb/s SAS RAID controller requires a motherboard that:
• Complies with PCIe 2.0 x8 lanes. The controller works with PCIe 2.0 x1,
x4, x8, and x16 signals in an x8 or x16 mechanical slot on the motherboard.
• Is backward compatible with PCIe 1.0.
Installation Tools
The following items may be needed to assist with installing the
6Gb/s SAS RAID controller into an available PCIe expansion slot.
• Small screwdriver
• Host system hardware manuals and manuals for the disk or
enclosure being installed.
Personal Safety Instructions
Use the following safety instructions to help you protect your
computer system from potential damage and to ensure your own
personal safety.
• Always wear a grounding strap or work on an ESD-protective
mat.
Warning:
High voltages may be found inside computer equipment. Be-
fore installing any of the hardware in this package or remov-
ing the protective covers of any computer equipment, turn off
power switches and disconnect power cords. Do not reconnect
the power cords until you have replaced the covers.
• Before opening the system cover, turn off power switches and
unplug the power cords. Do not reconnect the power cords until
you have replaced the covers.
Electrostatic Discharge
Static electricity can cause serious damage to the electronic com-
ponents on this 6Gb/s SAS RAID controller. To avoid damage
caused by electrostatic discharge, observe the following precau-
tions:
• Do not remove the 6Gb/s SAS RAID controller from its anti-stat-
ic packaging until you are ready to install it into a computer case.
• Handle the 6Gb/s SAS RAID controller by its edges or by the metal
mounting brackets at each end.
• Before you handle the 6Gb/s SAS RAID controller in any way,
touch a grounded, anti-static surface, such as an unpainted por-
tion of the system chassis, for a few seconds to discharge any
built-up static electricity.
2.3 Installation
Use the instructions below to install a PCIe 2.0 6Gb/s SAS RAID controller.
Step 1. Unpack
Unpack and remove the PCIe 2.0 6Gb/s SAS RAID controller from the package.
Inspect it carefully; if anything is missing or damaged, contact your local
dealer.
Step 2. Power PC/Server Off
Turn off the computer and remove the AC power cord. Remove the system's
cover. For instructions, please see the computer system documentation.
Step 3. Check Memory Module
Make sure the cache memory module is present and seated firmly in the DIMM
socket (DDR3-1333) for the ARC-1882ix-12/16/24 models. The physical memory
configuration for the ARC-1882ix series is one 240-pin DDR3-1333 ECC single
rank SDRAM DIMM module.
Step 4. Install the PCIe 6Gb/s SAS RAID Cards
To install the 6Gb/s SAS RAID controller, remove the mounting screw and
existing bracket from the rear panel behind the selected PCIe 2.0 slot.
Align the gold-fingered edge on the card with the selected PCIe 2.0 slot.
Press down gently but firmly to ensure that the card is properly seated in
the slot, as shown in Figure 2-5. Then, screw the bracket into the computer
chassis. ARC-1882 series controllers require a PCIe 2.0 x8 slot.

Figure 2-5, Insert 6Gb/s SAS RAID controller into a PCIe slot

Step 5. Mount the Drives
You can connect the SAS/SATA drives to the controller through direct cable
and backplane solutions. In the direct connection, SAS/SATA drives are
directly connected to the 6Gb/s SAS RAID controller PHY ports with SAS/SATA
cables. The 6Gb/s SAS RAID controller can support up to 28 PHY ports. Remove
the front bezel from the computer chassis and install the cages or SAS/SATA
drives in the computer chassis. Load the drives into the drive trays if
cages are installed. Be sure that power is connected to either the cage
backplane or the individual drives.
In the backplane solution, SAS/SATA drives are directly connected to the
6Gb/s SAS system backplane or through an expander board. The number of
SAS/SATA drives is limited to the number of slots available on the
backplane. Some backplanes support daisy-chain expansion to further
backplanes. The 6Gb/s SAS RAID controller can support daisy-chain expansion
of up to 8 enclosures; the maximum is 128 drive devices through 8
enclosures. The following figure shows how to connect the external Min SAS
cable from a 6Gb/s SAS RAID controller that has external connectors to the
external drive boxes or drive enclosures.
The following table lists the maximum numbers supported by the 6Gb/s SAS
RAID controller:
Max No.: Disks/Enclosure = 32, Expanders = 8, Disks/Controller = 128, Volumes = 128
Figure 2-6, External connector to a drive box or drive enclosure
Note:
1. The maximum number of disk drives included in a single RAID set is 32.
Step 6. Install SAS Cable
This section describes how to connect the SAS cables to the controller.
Figure 2-7, SAS cable connected to HDD
Figure 2-8, SAS cable connected to backplane
Step 7. Install the LED Cable (option)
The preferred I/O connector for server backplanes is the internal SFF-8087
connector. This connector has eight signal pins to support four SAS/SATA
drives and six pins for the SGPIO (Serial General Purpose Input/Output)
side-band signals. The SGPIO bus is used for efficient LED management and
for sensing drive Locate status. See SFF-8485 for the specification of the
SGPIO bus. For backplanes without SGPIO support, please refer to Section
2.5, LED Cables, for fault/activity LED cable installation.
LED Management: The backplane may contain LEDs to indicate drive status.
Light from the LEDs can be transmitted to the outside of the server by using
light pipes mounted on the SAS drive tray. A small microcontroller on the
backplane, connected via the SGPIO bus to a 6Gb/s SAS RAID controller, can
control the LEDs (Activity: blinking 5 times/second; Fault: solid
illuminated).
Drive Locate Circuitry: The location of a drive may be detected by
sensing the voltage level of one of the pre-charge pins before and
after a drive is installed.
The following signals define the SGPIO assignments for the Min SAS 4i
internal connector (SFF-8087) in the 6Gb/s SAS RAID controller.
PIN        Description
SideBand0  SClock (clock signal)
SideBand1  SLoad (last clock of a bit stream)
SideBand2  Ground
SideBand3  Ground
SideBand4  SDataOut (serial data output bit stream)
SideBand5  SDataIn (serial data input bit stream)
SideBand6  Reserved
SideBand7  Reserved
The SFF-8087 to 4xSATA cable with sideband follows the SFF-8448
specification. The SFF-8448 sideband signal cable is reserved for backplanes
with a sideband header on them. The preceding table defines the sideband
connector signals, which work with the Areca sideband cable on its SFF-8087
to 4xSATA cable.
Note:
For the latest release versions of drivers, please download them from
http://www.areca.com.tw/support/main.htm
The sideband header is located on the backplane. For SGPIO to work properly,
please connect the Areca 8-pin sideband cable to the sideband header as
shown above. See the table for pin definitions.
Step 8. Adding a Battery Backup Module (optional)
Please refer to Appendix B for installing the BBM in your 6Gb/s SAS
RAID controller.
Step 9. Re-check Fault LED Cable Connections (optional)
Be sure that the proper failed drive channel information is dis-
played by the fault LEDs. An improper connection will tell the user
to ‘‘Hot Swap’’ the wrong drive. This can result in removing the
wrong disk (one that is functioning properly) from the controller.
This can result in failure and loss of system data.
Step 10. Power up the System
Thoroughly check the installation, reinstall the computer cover, and
reconnect the power cord cables. Turn on the power switch at the
rear of the computer (if equipped) and then press the power button
at the front of the host computer.
Step 11. Install the Controller Driver
For a new system:
• Driver installation usually takes place as part of operating system
installation. Please refer to Chapter 4, Driver Installation, for the
detailed installation procedure.
In an existing system:
• Install the controller driver into the existing operating system. For the
detailed installation procedure, please refer to Chapter 4, Driver
Installation.
Step 12. Install ArcHttp Proxy Server
The 6Gb/s SAS RAID controller rmware has embedded the web-
browser McRAID storage manager. ArcHttp proxy server will launch
the web-browser McRAID storage manager. It provides all of the
creation, management and monitor 6Gb/s SAS RAID controller
status. Please refer to the Chapter 5 for the detail ArcHttp Proxy
Server Installation. For SNMP agent function, please refer to Ap-
pendix C.
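After the ArcHttp proxy server is running, a quick way to confirm that the
McRAID storage manager is reachable is a plain HTTP request. The sketch
below is a generic check, not an Areca utility; the host and port are
assumptions, so substitute the port that your ArcHttp configuration actually
assigned to the controller.

```python
# Hypothetical reachability check for the McRAID storage manager behind ArcHttp.
import urllib.request

MCRAID_URL = "http://localhost:81/"   # assumed host/port; use your ArcHttp-configured port

try:
    with urllib.request.urlopen(MCRAID_URL, timeout=5) as response:
        print("McRAID storage manager answered, HTTP status", response.status)
except OSError as error:
    print("ArcHttp proxy/McRAID not reachable:", error)
```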
Step 13. Congure Volume Set
The controller congures RAID functionality through the McBIOS
RAID manager. Please refer to Chapter 3, McBIOS RAID Manager,
for the detail. The RAID controller can also be congured through
the McRAID storage manager with ArcHttp proxy server installed,
LCD module (refer to LCD manual) or through on-board LAN port.
For this option, please refer to Chapter 6, Web Browser-Based Con-
guration.
Step 14. Determining the Boot Sequences
For PC system:
• The 6Gb/s SAS RAID controller is a bootable controller. If your system
already contains a bootable device with an installed operating system, you
can set up your system to boot a second operating system from the new
controller. To add a second bootable controller, you may need to enter the
motherboard BIOS setup and change the device boot sequence so that the
6Gb/s SAS RAID controller heads the list. If the system BIOS setup does not
allow this change, your system may not be configurable to allow the 6Gb/s
SAS RAID controller to act as a second boot device.
For Apple Mac Pro system:
• Mac OS X 10.X currently cannot boot directly from a 6Gb/s SAS controller
volume on a Power Mac G5 machine (Open Firmware is not supported), so the
volume can only be used as secondary storage there. All Intel-based Mac Pro
machines use EFI (not Open Firmware, which was used for PPC Macs) to boot
the system. Areca supports EFI BIOS on its PCIe 2.0 6Gb/s SAS RAID
controllers, so you have an alternative way to add a volume set to the Mac
Pro bootable device listing. You can follow the procedures below to add a
PCIe 2.0 6Gb/s SAS RAID controller to the Mac Pro bootable device listing.
(1). Upgrade the EFI BIOS from the shipping <CD-ROM>\Firmware\Mac\ directory
or from www.areca.com.tw if the controller shipped by default with a legacy
BIOS for the PC. Please follow Appendix A, Upgrading Flash ROM Update
Process, to update the legacy BIOS to the EFI BIOS so that the Mac Pro can
boot from the 6Gb/s SAS RAID controller's volume.
(2). Clone (for example, with the Carbon Copy Cloner utility) the Mac OS X
10.5.x, 10.6.x or 10.7.x system disk on the Mac Pro to the PCIe 2.0 6Gb/s
SAS RAID controller volume set. Carbon Copy Cloner is an archival type of
backup software; you can take your whole Mac OS X system and make a carbon
copy or clone on the Areca volume set as if it were another hard drive.
(3). Power up the Mac Pro machine; it will take about 30 seconds for the
controller firmware to become ready. During this period the boot-up screen
remains blank before the Areca volume appears in the bootable device list.
2.4 SAS Cables
You can connect the end devices to each other through direct cables or
through SAS expander/backplane connections. The 6Gb/s SAS RAID controller
supports daisy-chain expansion of up to 8 enclosures. The following are
examples of internal SAS/SATA cables and an external SAS cable.
2.4.1 Internal Min SAS 4i to SATA Cable
The Min SAS 4i to SATA cables are used for connection between the 6Gb/s SAS
RAID controller internal connectors and connectors on the SAS/SATA disk
drives or SAS/SATA connector backplane. The 6Gb/s SAS controllers have 1-6
Min SAS 4i (SFF-8087) internal connectors, each of which can support up to
four SAS/SATA drives.
These controllers can be installed in a server RAID enclosure with a
standard SATA connector backplane. The following diagram shows the Min SAS
4i to 4xSATA cables. A backplane that supports an SGPIO header can leverage
the SGPIO function on the 6Gb/s SAS RAID controller through the sideband
cable. The SFF-8448 sideband signal cable is reserved for backplanes with a
sideband header.
Figure 2-9, Internal Min SAS 4i to 4x SATA cable
2.4.2 Internal Min SAS 4i to 4xSFF-8482 Cable
These controllers can be installed in a server RAID enclosure without a
backplane. This kind of cable attaches directly to the SAS disk drives. The
following diagram shows the Min SAS 4i (SFF-8087) to 4xSFF-8482 cables.
Figure 2-10, Min SAS 4i to 4xSFF-8482 cable
2.4.3 Internal Min SAS 4i (SFF-8087) to Internal
Min SAS 4i (SFF-8087) cable
The 6Gb/s SAS RAID controllers have 1-6 Min SAS 4i internal SFF-8087
connectors, each of which can support up to four SAS/SATA signals. These
controllers can be installed in a server RAID enclosure with a Min SAS 4i
internal connector backplane. This Min SAS 4i cable has eight signal pins to
support four SAS/SATA drives and six pins for the SGPIO (Serial General
Purpose Input/Output) side-band signals. The SGPIO bus is used for efficient
LED management and for sensing drive Locate status.
Figure 2-11, Min SAS 4i to Min SAS 4i cable
2.4.4 External Min SAS 4x Drive Boxes and Drive Expanders
The Min SAS 4x external cables are used for connection between the 6Gb/s
SAS controller external connectors and connectors on the external drive
boxes or drive expanders (JBOD). The 6Gb/s SAS controller has one or more
Min SAS 4x (SFF-8088) external connectors, each of which can support up to
four SAS/SATA signals.
Figure 2-12, Min SAS 4x to Min SAS 4x cable
2.5 LED Cables
Most older SATA backplanes do not support SGPIO. The 6Gb/s SAS controller
therefore also provides two kinds of alternative LED cable headers to report
the fault/activity status for those backplanes. The global indicator
connector is used by the server's global indicator LED.
The following electronic schematic shows the 6Gb/s SAS RAID controller logic
for the fault/activity header. The signal for each pin is on the cathode (-)
side.
The following diagrams and descriptions describe each type of connector.
Note:
A cable for the global indicator comes with your computer
system. Cables for the individual drive LEDs may come with a
drive cage, or you may need to purchase them.
A: Individual Activity/Fault LED and Global Indicator Connector
Most backplanes support HDD activity LEDs driven directly from the HDDs. The
6Gb/s SAS RAID controller additionally provides fault signals for the fault
LEDs. Connect the cables for the drive fault LEDs between the backplane of
the cage and the respective connector on the 6Gb/s SAS RAID controller.
The following table describes the fault LED signal behavior.
Fault LED
- Normal status: When the fault LED is solid illuminated, there is no disk
present. When the fault LED is off, the disk is present and its status is
normal.
- Problem indication: When the fault LED is slow blinking (2 times/sec),
that disk drive has failed and should be hot-swapped immediately. When the
activity LED is illuminated and the fault LED is fast blinking (10
times/sec), there is rebuilding activity on that disk drive.
Figure 2-13, ARC-
1882ix-12/16/24 individual
LED for each channel drive and
global indicator connector for
computer case.
If the system will use only a single global indicator, attach the LED to the
two pins of the global activity/cache write-pending connector. The global
fault pin pair connector carries the overall fault signal; this signal
lights up on any disk drive failure.
The following diagrams show all LEDs, connectors and pin locations.
Figure 2-14, ARC-1882i
individual LED for each channel
drive and global indicator
connector for computer case.
Figure 2-15, ARC-1882LP
individual LED for each channel
drive and global indicator
connector for computer case.
Figure 2-16, ARC-1882x
individual LED for each channel
drive and global indicator
connector for computer case.
B: Areca Serial Bus Connector
You can also connect the Areca interface to a proprietary SAS/SATA backplane
enclosure. This can reduce the number of activity LED and/or fault LED
cables. The I2C interface can also cascade to another SAS/SATA backplane
enclosure for additional channel status display.
Figure 2-18, Activity/Fault LED serial bus connector connected between
6Gb/s SAS RAID controller & 4 SATA HDD backplane.
The following table is the serial bus signal name description for the LCD &
fault/activity LED.
PIN  Description
1    Power (+5V)
2    GND
3    LCD Module Interrupt
4    Protect Key
5    LCD Module Serial Data
6    Fault/Activity Clock
7    Fault/Activity Serial Data
8    LCD Module Clock
The Areca serial bus also supports SES (SCSI Enclosure Services) over I2C
through an internal I2C backplane cable. The backplane cable connects the
I2C signal from the Areca controller to the backplane using an IPMI-style
3-pin I2C connector. In other words, you connect the I2C cable to the
backplane and let the backplane LEDs indicate hard disk failure status.
2.5 Hot-plug Drive Replacement
The RAID controller supports hot-swap drive replacement without powering
down the system. A disk can be disconnected, removed, or replaced with a
different disk without taking the system off-line. RAID rebuilding is
processed automatically in the background. While a disk is being hot
swapped, the RAID controller may no longer be fault tolerant. Fault
tolerance will be lost until the hot-swapped drive is replaced and the
rebuild operation is completed.
2.5.1 Recognizing a Drive Failure
A drive failure can be identified in one of the following ways:
1). An error status message lists failed drives in the event log.
2). A fault LED illuminates on the front of the RAID subsystem if failed
drives are inside.
Note:
The capacity of the replacement drive must be at least as large as the
capacity of the other drives in the RAID set. Drives of insufficient
capacity will be failed immediately by the RAID controller without starting
the “Automatic Data Rebuild”.
2.5.2 Replacing a Failed Drive
With a RAID subsystem drive tray, you can replace a defective physical drive
while your computer is still operating. When a new drive has been installed,
data reconstruction is started automatically to rebuild the contents of the
disk drive. The controller always uses the smallest hot spare that “fits”.
If a hot spare is used and the defective drive is exchanged on-line, the
newly inserted HDD will automatically be assigned as a hot spare HDD.
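The "smallest hot spare that fits" policy described above can be pictured
with a few lines of Python. This is purely illustrative of the selection
rule, not controller firmware; the function and the example sizes are made
up for the demonstration.

```python
# Illustrative sketch of "smallest hot spare that fits": among spares at least
# as large as the failed member, the smallest one is chosen.

def pick_hot_spare(failed_drive_gb, spare_sizes_gb):
    candidates = [size for size in spare_sizes_gb if size >= failed_drive_gb]
    return min(candidates) if candidates else None

# A failed 1000GB member with 750GB, 1000GB and 2000GB spares available:
print(pick_hot_spare(1000, [750, 1000, 2000]))   # -> 1000, the smallest spare that fits
```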
2.6 Summary of the installation
The ow chart below describes the installation procedures for 6Gb/
s SAS RAID controllers. These procedures includes hardware instal-
lation, the creation and conguration of a RAID volume through the
McBIOS/McRAID manager, OS installation and installation of 6Gb/s
SAS RAID controller software.
The software components congure and monitor the 6Gb/s SAS
RAID controllers as following table.
Conguration Utility Operating System Supported
McBIOS RAID Manager OS-Independent
McRAID Storage Manager
(Via Archttp proxy server)
Windows 7/2008/Vista/XP/2003, Linux,
FreeBSD, Solaris and Mac
SAP Monitor (Single Admin Portal to
scan for multiple RAID units in the net-
work, via ArcHttp proxy server)
Windows 7/2008/Vista/XP/2003
SNMP Manager Console Integration Windows 7/2008/Vista/XP/2003, Linux
and FreeBSD
McRAID Storage Manager
Before launching the firmware-embedded web server (the McRAID storage
manager) through the PCIe bus, you first need to install the ArcHttp proxy
server on your server system. If you need additional information about
installation and start-up of this function, see the McRAID Storage Manager
section in Chapter 6.
SNMP Manager Console Integration
There are two ways to transport SNMP data on the 6Gb/s SAS RAID controller:
the in-band PCIe host bus interface or the out-of-band built-in LAN
interface. Enter an address in the “SNMP Trap IP Address“ option of the
firmware-embedded SNMP configuration function to select agent-side SNMP
communication over the out-of-band built-in LAN interface. To use the
in-band PCIe host bus interface, leave the “SNMP Trap IP Address“ option
blank.
• Out-of-Band: Using the LAN Port Interface
The out-of-band interface transports SNMP data of the 6Gb/s SAS controllers
to a remote station connected to the controller through a network cable.
Before launching the SNMP manager on the client, you first need to enable
the firmware-embedded SNMP agent function; no additional agent software is
required on your server system. If you need additional information about
installation and start-up of this function, see section 6.8.6, SNMP
Configuration.
• In-Band: Using the PCIe Host Bus Interface
The in-band interface manages the SNMP data of the 6Gb/s SAS controllers
over the PCIe host bus. The in-band interface is simpler than the
out-of-band interface because it requires less hardware
in its conguration.Since the SAS controller is already installed in
the host system, no extra connection is necessary. Just load the
necessary in-band Areca SNMP extension agent for the control-
lers.
Before launching the SNMP agent in the sever, you need rst to
enable the rmware-embedded SNMP community conguration
and install Areca SNMP extension agent in your server system.
If you need additional information about installation and start-up
the function, see the SNMP Operation & Installation section in the
Appendix C.
Single Admin Portal (SAP) Monitor
This utility can scan for multiple RAID units on the network and monitor the
controller set status. For additional information, see the utility manual
(SAP) on the packaged CD or download it from the web site
http://www.areca.com.tw.
3. McBIOS RAID Manager
The system mainboard BIOS automatically configures the following 6Gb/s SAS
RAID controller parameters at power-up:
• I/O Port Address
• Interrupt Channel (IRQ)
• Controller ROM Base Address
Use McBIOS RAID manager to further configure the 6Gb/s SAS RAID controller
to suit your server hardware and operating system.
3.1 Starting the McBIOS RAID Manager
This section explains how to use the McBIOS RAID manager to configure your
RAID system. The McBIOS RAID manager is designed to be user-friendly. It is
a menu-driven program, residing in the firmware, which allows you to scroll
through various menus and sub-menus and select among the predetermined
configuration options.
When starting a system with a 6Gb/s SAS RAID controller installed, the
controller displays the following message on the monitor during the start-up
sequence (after the system BIOS startup screen but before the operating
system boots):

ARC-1882 PCIEx8/2.5G RAID Controller - DRAM: 1024(MB) / #Channels: 8
BIOS: V1.22d / Date: 2010-11-16 - F/W: V1.49 / Date: 2011-05-31
Bus/Dev/Fun= 4/0/0, I/0-Port=28000000h, IRQ=11, BIOS=C800 : 0h
ID-LUN=00-0, Vol=”Areca ARC-1882-VOL#000R001”, Size=3.6 (TB)
ID-LUN=00-1, Vol=”Areca ARC-1882-VOL#001R001”, Size=3.6 (TB)
ID-LUN=00-2, Vol=”Areca ARC-1882-VOL#002R001”, Size=3.6 (TB)
RAID controller BIOS not installed
Press <Tab/F6> to enter SETUP menu. 9 second(s) left <ESC to Skip>..

The McBIOS RAID manager message remains on your screen for about nine
seconds, giving you time to start the configuration menu by pressing Tab or
F6. If you do not wish to enter the configuration menu, press ESC to skip
configuration immediately. When activated, the McBIOS RAID manager window
appears, showing a selection dialog box listing the 6Gb/s SAS RAID
controllers that are installed in the system.
The legend at the bottom of the screen shows you what keys are enabled for
the windows.
Areca Technology Corporation RAID Setup <V1.40, 2006/08/8>
ArrowKey Or AZ:Move Cursor, Enter: Select, **** Press F10 (Tab) to Reboot ****
Select An Adapter To Configure
( 001/ 0/0) I/O=28000000h, IRQ = 9
Use the Up and Down arrow keys to select the controller you want to configure. While the desired controller is highlighted, press the Enter key to enter the main menu of the McBIOS RAID manager.
3.2 McBIOS RAID manager
The McBIOS RAID manager is firmware-based and is used to configure RAID sets and volume sets. Because the utility resides in the 6Gb/s SAS RAID controller firmware, operation is independent of any operating systems on your computer. This utility can be used to:
• Create RAID sets,
• Expand RAID sets,
• Add physical drives,
• Define volume sets,
• Modify volume sets,
• Modify RAID level/stripe size,
• Define pass-through disk drives,
• Modify system functions, and
• Designate drives as hot spares.
[McBIOS screen: Main Menu of the Areca Technology Corporation RAID Controller setup (Quick Volume/Raid Setup, Raid Set Function, Volume Set Function, Physical Drives, Raid System Function, Hdd Power Management, Ethernet Configuration, View System Events, Clear Event Buffer, Hardware Monitor, System Information) with the "Verify Password" prompt]
Note:
The manufacturer default password is set to 0000; this password can be modified by selecting "Change Password" in the "Raid System Function" section.
3.3 Configuring Raid Sets and Volume Sets
You can configure RAID sets and volume sets with the McBIOS RAID manager either automatically, using "Quick Volume/Raid Setup", or manually, using the "Raid Set/Volume Set Function". Each configuration method requires a different level of user input. The general flow of operations for RAID set and volume set configuration is:
Step Action
1 Designate hot spares/pass-through drives (optional).
2 Choose a configuration method.
3 Create RAID sets using the available physical drives.
4 Define volume sets using the space available in the RAID set.
5 Initialize the volume sets and use volume sets (as logical drives) in the host OS.
3.4 Designating Drives as Hot Spares
Any unused disk drive that is not part of a RAID set can be designated as a hot spare. The "Quick Volume/Raid Setup" configuration will add the spare disk drive and automatically display the appropriate RAID levels from which the user can select. For the "Raid Set Function" configuration option, the user can use the "Create Hot Spare" option to define the hot spare disk drive.
When a hot spare disk drive is being created using the "Create Hot Spare" option (in the "Raid Set Function"), all unused physical devices connected to the current controller appear:
Choose the target disk by selecting the appropriate check box. Press the Enter key to select a disk drive, and press Yes in the "Create Hot Spare" screen to designate it as a hot spare.
3.5 Using Quick Volume/Raid Setup Configuration
"Quick Volume/Raid Setup" configuration collects all available drives and includes them in a RAID set. The RAID set you create is associated with exactly one volume set. You will only be able to modify the default RAID level, stripe size and capacity of the new volume set. Designating drives as hot spares is also possible in the "Raid Level" selection option. The volume set default settings will be:
Parameter Setting
Volume Name: ARC-1882-VOL#00
SCSI Channel/SCSI ID/SCSI LUN: 0/0/0
Cache Mode: Write-Back
Tag Queuing: Yes
The default setting values can be changed after the configuration is completed. Follow the steps below to create arrays using the "Quick Volume/Raid Setup" method:
Step Action
1 Choose "Quick Volume/Raid Setup" from the main menu. The available RAID levels with hot spare for the current volume set drive are displayed.
2 It is recommended that you use drives of the same capacity in a specific array. If you use drives with different capacities in an array, all drives in the RAID set will be set to the capacity of the smallest drive in the RAID set.
The number of physical drives in a specific array determines which RAID levels can be implemented in the array.
RAID 0 requires 1 or more physical drives.
RAID 1 requires at least 2 physical drives.
RAID 10(1E) requires at least 3 physical drives.
RAID 3 requires at least 3 physical drives.
RAID 5 requires at least 3 physical drives.
RAID 3 + Spare requires at least 4 physical drives.
RAID 5 + Spare requires at least 4 physical drives.
RAID 6 requires at least 4 physical drives.
RAID 6 + Spare requires at least 5 physical drives.
Highlight the desired RAID level for the volume set and press the Enter key to confirm.
3 The capacity for the current volume set is entered after highlighting the desired RAID level and pressing the Enter key.
The capacity for the current volume set is displayed. Use the UP and DOWN arrow keys to set the capacity of the volume set and press the Enter key to confirm. The available stripe sizes for the current volume set are then displayed.
4 Use the UP and DOWN arrow keys to select the current volume set stripe size and press the Enter key to confirm. This parameter specifies the size of the stripes written to each disk in a RAID 0, 1, 10(1E), 5 or 6 volume set. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB. A larger stripe size provides better read performance, especially when the computer performs mostly sequential reads. However, if the computer performs random read requests more often, choose a smaller stripe size.
5 When you are finished defining the volume set, press the Yes key to confirm the "Quick Volume And Raid Set Setup" function.
6 For "Foreground (Fast Completion)", press the Enter key to define fast initialization, or select "Background (Instant Available)" or "No Init (To Rescue Volume)". With "Background Initialization", the initialization proceeds as a background task and the volume set is fully accessible for system reads and writes. The operating system can instantly access the newly created arrays without requiring a reboot and without waiting for the initialization to complete. With "Foreground Initialization", the initialization must be completed before the volume set is ready for system access. With "No Init", there is no initialization of this volume.
7 Initialize the volume set you have just configured.
8 If you need to add an additional volume set, use the main menu "Create Volume Set" function.
3.6 Using Raid Set/Volume Set Function Method
In "Raid Set Function", you can use the "Create Raid Set" function to generate a new RAID set. In "Volume Set Function", you can use the "Create Volume Set" function to generate an associated volume set and its configuration parameters.
If the current controller has unused physical devices connected, you can choose the "Create Hot Spare" option in the "Raid Set Function" to define a global hot spare. Select this method to configure new RAID sets and volume sets. The "Raid Set/Volume Set Function" configuration option allows you to associate volume sets with partial and full RAID sets.
Step Action
1 To set up a hot spare (optional), choose "Raid Set Function" from the main menu. Select "Create Hot Spare" and press the Enter key to define the hot spare.
2 Choose "Raid Set Function" from the main menu. Select "Create Raid Set" and press the Enter key.
3 The "Select a Drive For Raid Set" window is displayed showing the SAS/SATA drives connected to the 6Gb/s SAS RAID controller.
4 Press the UP and DOWN arrow keys to select specific physical drives. Press the Enter key to associate the selected physical drive with the current RAID set.
It is recommended that you use drives of the same capacity in a specific array. If you use drives with different capacities in an array, all drives in the RAID set will be set to the capacity of the smallest drive in the RAID set.
The number of physical drives in a specific array determines which RAID levels can be implemented in the array.
RAID 0 requires 1 or more physical drives.
RAID 1 requires at least 2 physical drives.
RAID 10(1E) requires at least 3 physical drives.
RAID 3 requires at least 3 physical drives.
RAID 5 requires at least 3 physical drives.
RAID 6 requires at least 4 physical drives.
RAID 30 requires at least 6 physical drives.
RAID 50 requires at least 6 physical drives.
RAID 60 requires at least 8 physical drives.
5 After adding the desired physical drives to the current RAID set, press the Enter key to confirm the "Create Raid Set" function.
6 An "Edit The Raid Set Name" dialog box appears. Enter 1 to 15 alphanumeric characters to define a unique identifier for this new RAID set. The default RAID set name will always appear as Raid Set. #. Press the Enter key to finish editing the name.
7 Press the Enter key when you are finished creating the current RAID set. To continue defining another RAID set, repeat step 3. To begin volume set configuration, go to step 8.
8 Choose "Volume Set Function" from the main menu. Select "Create Volume Set" and press the Enter key.
9 Choose a RAID set in the "Create Volume From Raid Set" window. Press the Yes key to confirm the selection.
10 For "Foreground (Fast Completion)", press the Enter key to define fast initialization, or select "Background (Instant Available)" or "No Init (To Rescue Volume)". With "Background Initialization", the initialization proceeds as a background task and the volume set is fully accessible for system reads and writes. The operating system can instantly access the newly created arrays without requiring a reboot and without waiting for the initialization to complete. With "Foreground Initialization", the initialization must be completed before the volume set is ready for system access. With "No Init", there is no initialization of this volume.
11 If space remains in the RAID set, the next volume set can be configured. Repeat steps 8 to 10 to configure another volume set.
3.7 Main Menu
The main menu shows all functions that are available for executing actions, which is accomplished by selecting the appropriate menu item.
Note:
The manufacturer default password is set to 0000; this password can be modified by selecting "Change Password" in the "Raid System Function" section.
[McBIOS screen: Main Menu of the setup utility with the "Verify Password" prompt]
Option Description
Quick Volume/Raid Setup: Create a default configuration based on the number of physical disks installed
Raid Set Function: Create a customized RAID set
Volume Set Function: Create a customized volume set
Physical Drives: View individual disk information
Raid System Function: Set up the RAID system configuration
Hdd Power Management: Manage HDD power based on usage patterns
Ethernet Configuration: LAN port settings
View System Events: Record all system events in the buffer
Clear Event Buffer: Clear all information in the event buffer
Hardware Monitor: Show the hardware system environment status
System Information: View the controller system information
This password option allows the user to set or clear the RAID controller's password protection feature. Once the password has been set, the user can only monitor and configure the RAID controller by providing the correct password. The password is used to protect the internal RAID controller from unauthorized entry. The controller will prompt for the password only when entering the main menu from the initial screen. The RAID controller will automatically return to the initial screen when it does not receive any command within five minutes.
3.7.1 Quick Volume/Raid Setup
"Quick Volume/Raid Setup" is the fastest way to prepare a RAID set and volume set. It requires only a few keystrokes to complete. Although disk drives of different capacities may be used in the RAID set, the capacity of the smallest disk drive will be used as the capacity of every disk drive in the RAID set. The "Quick Volume/Raid Setup" option creates a RAID set with the following properties:
1). All of the physical drives are contained in one RAID set.
2). The RAID level, hot spare, capacity, and stripe size options are selected during the configuration process.
3). When a single volume set is created, it can consume all or a portion of the available disk capacity in this RAID set.
4). If you need to add an additional volume set, use the main menu "Create Volume Set" function.
The total number of physical drives in a specific RAID set determines the RAID levels that can be implemented within the RAID set. Select "Quick Volume/Raid Setup" from the main menu; all possible RAID levels will be displayed on the screen.
[McBIOS screen: Main Menu -> "Quick Volume/Raid Setup" -> "Total 5 Drives" RAID level list (Raid 0, Raid 1+0, Raid 1+0+Spare, Raid 3, Raid 5, Raid 3+Spare, Raid 5+Spare, Raid 6, Raid 6+Spare)]
If the volume capacity will exceed 2TB, the controller will show the "Greater Two TB Volume Support" sub-menu.
• No
It keeps the volume size within the 2TB limit.
• Use 64bit LBA
This option uses a 16-byte CDB instead of a 10-byte CDB. The maximum volume capacity is up to 512TB.
[McBIOS screen: "Quick Volume/Raid Setup" RAID level list with the "Greater Two TB Volume Support" pop-up (No, Use 64bit LBA, Use 4K Block)]
This option works on operating systems which support the 16-byte CDB, such as:
Windows 2003 with SP1 or later
Linux kernel 2.6.x or later
• Use 4K Block
This option changes the sector size from the default 512 bytes to 4K bytes. The maximum volume capacity is up to 16TB. This option works under the Windows platform only, and the volume cannot be converted to a "Dynamic Disk", because the 4K sector size is not a standard format.
For more details, please download the PDF file from ftp://ftp.areca.com.tw/RaidCards/Documents/Manual_Spec/Over2TB_050721.zip
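The figures above follow directly from the LBA width and the sector size; the short sketch below is only a back-of-the-envelope check of those limits, not part of the firmware.

    # A 10-byte CDB (READ/WRITE(10)) carries a 32-bit LBA; a 16-byte CDB
    # (READ/WRITE(16)) carries a 64-bit LBA.
    def max_capacity_tib(lba_bits: int, sector_bytes: int) -> float:
        """Largest addressable capacity in TiB for a given LBA width and sector size."""
        return (2 ** lba_bits) * sector_bytes / 2 ** 40

    print(max_capacity_tib(32, 512))    # "No" option:           2.0 TiB
    print(max_capacity_tib(32, 4096))   # "Use 4K Block" option: 16.0 TiB
    # "Use 64bit LBA" removes the addressing limit entirely; the 512TB figure
    # quoted above is a controller/firmware limit, not 2**64 sectors.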
A single volume set is created and consumes all or a portion of the disk capacity available in this RAID set. Define the capacity of the volume set in the "Available Capacity" popup. The default value for the volume set, which is 100% of the available capacity, is displayed as the selected capacity. Use the UP and DOWN arrow keys to set the capacity of the volume set and press the Enter key to accept this value. If the volume set uses only part of the RAID set capacity, you can use the "Create Volume Set" option in the main menu to define additional volume sets.
[McBIOS screen: "Quick Volume/Raid Setup" with the "Available Capacity : 2400.0GB / Selected Capacity : 2400.0GB" pop-up]
Stripe Size
This parameter sets the size of the stripe written to each disk in a RAID 0, 1, 1E, 10, 5, or 6 logical drive. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB. A larger stripe size produces better read performance, especially if your computer does mostly sequential reads. However, if you are sure that your computer performs random reads more often, select a smaller stripe size.
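To make that trade-off concrete, the sketch below maps a logical block address onto a member disk for a simple striped layout; it is an illustration only and does not reproduce the controller's actual on-disk format. With a large stripe, consecutive blocks stay on one disk longer (good for sequential reads); with a small stripe, neighbouring blocks spread across more disks (good for random requests).

    # Illustration of striping, assuming a simple left-to-right RAID 0 layout
    # (the controller's real on-disk layout may differ).
    def locate_block(lba, stripe_kb, num_disks, block_bytes=512):
        """Return (member_disk, stripe_row) holding the given logical block."""
        blocks_per_stripe = stripe_kb * 1024 // block_bytes
        stripe_index = lba // blocks_per_stripe   # which stripe unit overall
        return stripe_index % num_disks, stripe_index // num_disks

    # With a 64 KB stripe, 128 consecutive 512-byte blocks land on the same
    # disk; with a 4 KB stripe, only 8 do, so random I/O spreads more evenly.
    for stripe in (4, 64):
        disks = [locate_block(lba, stripe, num_disks=5)[0] for lba in range(0, 256, 8)]
        print(stripe, "KB stripe:", disks)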
Press the Yes key in the "Create Vol/Raid Set" dialog box, and the RAID set and volume set will start to initialize.
[McBIOS screen: "Select Strip Size" pop-up (4K, 8K, 16K, 32K, 64K, 128K) with the "Create Vol/Raid Set" Yes/No confirmation]
[McBIOS screen: "Select Strip Size" pop-up offering 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K and 1M]
[McBIOS screen: "Initialization Mode" pop-up (Background (Instant Available), No Init (To Rescue Volume), Foreground (Faster Completion))]
Select "Foreground (Faster Completion)" or "Background (Instant Available)" for initialization, or "No Init (To Rescue Volume)" for recovering a missing RAID set configuration.
3.7.2 Raid Set Function
Manual configuration gives complete control of the RAID set settings, but it will take longer to configure than the "Quick Volume/Raid Setup" configuration. Select "Raid Set Function" to manually configure the RAID set for the first time, or to delete existing RAID sets and reconfigure the RAID set.
[McBIOS screen: Main Menu with "Raid Set Function" selected]
3.7.2.1 Create Raid Set
The following are the RAID set features for the 6Gb/s SAS RAID controller:
1. Up to 32 disk drives can be included in a single RAID set.
2. Up to 128 RAID sets can be created per controller, but RAID levels 30, 50 and 60 can only support eight sub-volumes (RAID sets).
To define a RAID set, follow the procedure below:
1). Select "Raid Set Function" from the main menu.
2). Select "Create Raid Set" from the "Raid Set Function" dialog box.
3). A "Select IDE Drive For Raid Set" window is displayed showing the SAS/SATA drives connected to the current controller. Press the UP and DOWN arrow keys to select specific physical drives. Press the Enter key to associate the selected physical drive with the current RAID set. By repeating this step, the user can add as many disk drives as are available to a single RAID set. When finished selecting SAS/SATA drives for the RAID set, press the Esc key. A "Create Raid Set Confirmation" screen will appear; select the Yes option to confirm it.
[McBIOS screen: "Raid Set Function" -> "Create Raid Set" -> "Select IDE Drives For Raid Set" list of SAS/SATA drives (enclosure/slot, capacity and model), with the drive in E#1 Slot#1 checked]
4). An "Edit The Raid Set Name" dialog box appears. Enter 1 to 15 alphanumeric characters to define a unique identifier for the RAID set. The default RAID set name will always appear as Raid Set. #.
5). Repeat step 3 to define additional RAID sets.
3.7.2.2 Delete Raid Set
To completely erase and reconfigure a RAID set, you must first delete it and then re-create the RAID set. To delete a RAID set, select the RAID set number that you want to delete in the "Select Raid Set To Delete" screen. The "Delete Raid Set" dialog box will then appear; press Yes to delete it. Warning: data on the RAID set will be lost if this option is used. To delete a RAID set that holds a RAID 30/50/60 volume, you must first delete the volumes belonging to those RAID sets.
Note:
To create a RAID 30/50/60 volume, you need to create multiple RAID sets (up to 8 RAID sets) first, with the same number of disk members in each RAID set. The maximum number of disk drives per volume set is 32 for RAID 0/1/10/3/5/6 and 128 for RAID 30/50/60.
[McBIOS screen: "Create Raid Set" drive list with the "Edit The Raid Set Name" dialog (default name "Raid Set # 000")]
[McBIOS screen: "Delete Raid Set" -> "Select Raid Set To Delete" list of RAID sets with the "Are you Sure?" Yes/No confirmation]
3.7.2.3 Expand Raid Set
Instead of deleting a RAID set and recreating it with additional disk drives, the "Expand Raid Set" function allows the user to add disk drives to a RAID set that has already been created.
To expand a RAID set:
Select the "Expand Raid Set" option. If there is an available disk, the "Select SAS/SATA Drives For Raid Set Expansion" screen appears.
Select the target RAID set by clicking on the appropriate radio button. Select the target disk by clicking on the appropriate check box.
Press the Yes key to start the expansion of the RAID set.
The new additional capacity can be utilized by one or more volume sets. The volume sets associated with this RAID set appear so that you have the chance to modify the RAID level or stripe size. Follow the instructions presented in "Modify Volume Set" to modify the volume sets; operating-system-specific utilities may be required to expand operating system partitions.
Note:
1. Once the "Expand Raid Set" process has started, the user cannot stop it. The process must be completed.
2. If a disk drive fails during RAID set expansion and a hot spare is available, an auto rebuild operation will occur after the RAID set expansion completes.
3. RAID 30/50/60 does not support "Expand Raid Set".
4. RAID set expansion is a quite critical process; we strongly recommend that the customer back up data before expanding. An unexpected accident may cause serious data corruption.
[McBIOS screen: "Expand Raid Set" -> "Select IDE Drives For Raid Set Expansion" RAID set list with the "Are you Sure?" Yes/No confirmation]
Migrating
Migration occurs when a disk is added to a RAID set. The migrating state is displayed in the RAID state area of the "Raid Set Information" screen when a disk is being added to a RAID set. The migrating state is also displayed in the associated volume state area of the "Volume Set Information" screen for the volume sets that belong to this RAID set.
3.7.2.4 Offline Raid Set
This function allows the user to unmount and remount a multi-disk volume. All HDDs of the selected RAID set will be put into the offline state and spun down, and their fault LEDs will be in fast blinking mode.
[McBIOS screen: "The Raid Set Information" showing Raid Set Name: Raid Set # 00, Member Disks: 2, Raid State: Migrating, Total Capacity: 800.0GB, Free Capacity: 800.0GB, Min Member Disk Size: 400.0GB, Member Disk Channels: E1S1, E1S2]
[McBIOS screen: "Offline Raid Set" -> "Select Raid Set To Offline" with the "Offline Raid Set" and "Are you Sure?" Yes/No confirmations]
3.7.2.5 Activate Incomplete Raid Set
The following screen is used to activate a RAID set after one of its disk drives has been removed while the power was off.
When one of the disk drives is removed while the power is off, the RAID set state will change to "Incomplete State". If the user wants to continue to work while the 6Gb/s SAS RAID controller is powered on, the user can use the "Activate Incomplete Raid Set" option to activate the RAID set. After the user selects this function, the RAID state will change to "Degraded Mode" and the RAID set will start to work.
[McBIOS screen: "Activate Raid Set" -> "Select Raid Set To Activate" list]
3.7.2.6 Create Hot Spare
When you choose the "Create Hot Spare" option in the "Raid Set Function", all unused physical devices connected to the current controller appear on the screen.
Select the target disk by selecting the appropriate check box. Press the Enter key to select a disk drive and press Yes in the "Create Hot Spare" screen to designate it as a hot spare.
The "Create Hot Spare" option gives you the ability to define a global or dedicated hot spare. Unlike a "Global Hot Spare", which can be used with any RAID set, a "Dedicated Hot Spare" can only be used with a specific RAID set or enclosure. When a disk drive fails in a RAID set or enclosure for which a dedicated hot spare has been pre-set, the data on the disk drive is rebuilt automatically on the dedicated hot spare disk.
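The three spare types amount to a simple eligibility rule; the sketch below only restates the definitions above and is not the firmware's actual spare-selection logic.

    # Illustration of hot-spare scope: a global spare covers any RAID set,
    # while a dedicated spare only covers its own RAID set or enclosure.
    def spare_can_cover(scope, target, failed_raidset, failed_enclosure):
        """scope is 'global', 'raidset' or 'enclosure'; target names the RAID set or enclosure."""
        if scope == "global":
            return True
        if scope == "raidset":
            return target == failed_raidset
        if scope == "enclosure":
            return target == failed_enclosure
        return False

    spares = [("global", None), ("raidset", "Raid Set # 000"), ("enclosure", "E#1")]
    # A drive fails in Raid Set # 001 inside enclosure E#1:
    print([spare_can_cover(s, t, "Raid Set # 001", "E#1") for s, t in spares])
    # -> [True, False, True]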
3.7.2.7 Delete Hot Spare
Select the target hot spare disk to delete by clicking on the appropriate check box.
Press the Enter key to select a hot spare disk drive, and press Yes in the "Delete Hot Spare" screen to delete the hot spare.
[McBIOS screen: "Create Hot Spare" -> "Select Drives For HotSpare" drive list with the "Select Hot Spare Type" pop-up (Dedicated To RaidSet, Dedicated To Enclosure, Global)]
[McBIOS screen: "Delete Hot Spare" -> "Select The HotSpare Device To Be Deleted" with the "Delete HotSpare?" Yes/No confirmation]
3.7.2.8 Rescue Raid Set
If the system is powered off during the RAID set update/creation period, the RAID set configuration could possibly disappear due to this abnormal condition. The "RESCUE" function can recover the missing RAID set information. The RAID controller uses the time as the RAID set signature, so the RAID set may have a different signature time after it is recovered. The "SIGNAT" function can regenerate the signature for the RAID set.
[McBIOS screen: "Rescue Raid Set" -> "Enter the Operation Key" prompt]
Note:
Please contact us to confirm whether you need to use the rescue function. Improper usage may cause configuration corruption.
One can also manually fail a drive, which is useful for forcing out a slow disk even when there is nothing physically wrong with the disk. A manually failed drive can be rebuilt by the hot spare and brought back on-line.
3.7.2.9 Raid Set Information
To display RAID set information, move the cursor bar to the desired RAID set number, then press the Enter key. The "Raid Set Information" screen will appear.
You can only view the information of the RAID set in this screen.
[McBIOS screen: "Raid Set Information" -> "Select Raid Set To Display" list and "The Raid Set Information" details (Raid Set Name, Member Disks, Raid State: Initializing, Raid Power State: Operating, Total/Free Capacity, Min Member Disk Size, Member Disk Channels)]
3.7.3 Volume Set Function
A volume set is seen by the host system as a single logical device; it is organized in a RAID level within the controller utilizing one or more physical disks. The RAID level refers to the level of data performance and protection of a volume set. A volume set can consume all of the capacity or a portion of the available disk capacity of a RAID set. Multiple volume sets can exist on a RAID set. If multiple volume sets reside on a specified RAID set, all volume sets will reside on all physical disks in the RAID set. Thus each volume set on the RAID set will have its data spread evenly across all the disks in the RAID set, rather than one volume set using some of the available disks and another volume set using other disks.
[McBIOS screen: Main Menu with "Volume Set Function" selected]
The following are the volume set features for the 6Gb/s SAS RAID controller:
1). Volume sets of different RAID levels may coexist on the same RAID set, with up to 128 volume sets per controller.
2). Up to 128 volume sets can be created in a RAID set.
3). The maximum addressable size of a single volume set is not limited to 2TB, because the controller is capable of 64-bit LBA mode. However, the operating system itself may not be capable of addressing more than 2TB.
See the Areca website ftp://ftp.areca.com.tw/RaidCards/Documents/Manual_Spec/Over2TB_050721.zip file for details.
3.7.3.1 Create Volume Set (0/1/10/3/5/6)
[McBIOS screen: "Volume Set Function" -> "Volume Set Functions" menu (Create Volume Set, Create Raid30/50/60, Delete Volume Set, Modify Volume Set, Check Volume Set, Stop Volume Check, Display Volume Info.) with "Create Volume Set" selected]
[McBIOS screen: "Create Volume Set" -> "Create Volume From Raid Set" -> "Volume Creation" attribute box (Raid Level, Capacity, Stripe Size, SCSI Channel/ID/LUN, Cache Mode, Tag Queuing, Volume Name) with the "Initialization Mode" pop-up (Background (Instant Available), No Init (To Rescue Volume), Foreground (Faster Completion))]
6. Repeat steps 3 to 5 to create additional volume sets.
7. The initialization percentage of the volume set will be displayed at the bottom line.
• Volume Name
The default volume name will always appear as ARC-1882-VOL #. You can rename the volume set, provided it does not exceed the 15-character limit.
[McBIOS screen: "Volume Creation" attribute box with the "Edit The Volume Name" dialog (default "ARC-1882-VOL# 000")]
[McBIOS screen: "Volume Creation" attribute box with the "Select Raid Level" pop-up (0, 0+1, 3, 5, 6)]
• Raid Level
Set the "Raid Level" for the volume set. Highlight "Raid Level" and press the Enter key. The available RAID levels for the current volume set are displayed. Select a RAID level and press the Enter key to confirm.
[McBIOS screen: "Volume Creation" attribute box with the "Available Capacity : 2400.0GB / Selected Capacity : 2400.0GB" pop-up]
• Capacity
The maximum available volume size is the default value for the first setting. Enter the appropriate volume size to fit your application. The capacity value can be increased or decreased with the UP and DOWN arrow keys. The capacity of each volume set must be less than or equal to the total capacity of the RAID set on which it resides.
• Stripe Size
This parameter sets the size of the segment written to each disk in a RAID 0, 1, 1E, 10, 5, 6, 50 or 60 logical drive. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB.
[McBIOS screen: "Volume Creation" attribute box with the "Stripe Size" field selected (64K)]
[McBIOS screen: "Volume Creation" attribute box with the "SCSI Channel" field selected]
• SCSI Channel
The 6Gb/s SAS RAID controller function simulates an external SCSI RAID controller. The host bus represents the SCSI channel. Choose the "SCSI Channel". A "Select SCSI Channel" dialog box appears; select the channel number and press the Enter key to confirm it.
[McBIOS screen: "Volume Creation" attribute box with the "SCSI ID" field selected]
[McBIOS screen: "Volume Creation" attribute box with the "SCSI LUN" field selected]
• SCSI ID
Each device attached to the 6Gb/s SAS RAID controller, as well
as the 6Gb/s SAS RAID controller itself, must be assigned a
unique SCSI ID number. A SCSI channel can connect up to 15
devices. It is necessary to assign a SCSI ID to each device from
a list of available SCSI IDs.
• SCSI LUN
Each SCSI ID can support up to 8 LUNs. Most 6Gb/s SAS con-
trollers treat each LUN as if it were a SAS disk.
• Cache Mode
User can set the cache mode to either “Write Through” or
“Write Back”.
[McBIOS screen: "Volume Creation" attribute box with the "Tag Queuing" field selected (Enabled)]
[McBIOS screen: "Volume Creation" attribute box with the "Cache Mode" field selected (Write Back)]
• Tag Queuing
This option, when enabled, can enhance overall system performance under multi-tasking operating systems. The Command Tag (Drive Channel) function controls the SAS command tag queuing support for each drive channel. This function should normally remain enabled. Disable this function only when using older drives that do not support command tag queuing.
[McBIOS screen: "Volume Set Functions" menu with "Create Raid30/50/60" selected]
3.7.3.2 Create Raid30/50/60 (Volume Set 30/50/60)
To create a RAID 30/50/60 volume set from a RAID set group, move the cursor bar to the main menu and click on the "Create Raid30/50/60" link. The "Select The Raid Set To Create Volume On It" screen will show all RAID set numbers. Tick the RAID set numbers (with the same number of disks per RAID set) that you want to include and then confirm the selection.
The new volume set attribute options allow users to select the Volume Name, Capacity, Raid Level, Stripe Size, SCSI ID/LUN, Cache Mode, and Tagged Command Queuing. Detailed descriptions of these parameters can be found in section 3.7.3.1. Users can modify the default values in this screen; the modification procedures are described in section 3.7.3.4.
Note:
RAID levels 30, 50 and 60 can support up to eight RAID sets (four pairs).
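As a rough capacity illustration (an assumption-based sketch, not a firmware calculation): a RAID 30/50/60 volume stripes across its member RAID sets, so its usable space is the sum of each sub-array's usable space, with each sub-array limited by its smallest member disk and losing one disk to parity for RAID 30/50 or two for RAID 60.

    # Rough usable-capacity estimate for a RAID 30/50/60 volume built from
    # several RAID sets with the same disk count. parity_disks is 1 per
    # sub-array for RAID 30/50 and 2 for RAID 60.
    def usable_capacity_gb(raid_sets, parity_disks):
        total = 0.0
        for member_sizes_gb in raid_sets:
            smallest = min(member_sizes_gb)           # every member is truncated
            data_disks = len(member_sizes_gb) - parity_disks
            total += data_disks * smallest
        return total

    # Two 3-disk RAID sets (500, 500, 400 GB each) combined as a RAID 50 volume:
    print(usable_capacity_gb([[500, 500, 400], [500, 500, 400]], parity_disks=1))  # 1600.0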
[McBIOS screen: "Create Raid 30/50/60" RAID set selection list showing each RAID set's free and total capacity, with Raid Set # 000 and # 001 ticked]
3.7.3.3 Delete Volume Set
To delete a volume set from a RAID set, move the cursor bar to the "Volume Set Functions" menu and select the "Delete Volume Set" item, then press the Enter key. The "Volume Set Functions" menu will show all Raid Set # items. Move the cursor bar to a RAID set number, then press the Enter key to show all volume sets within that RAID set. Move the cursor to the volume set number that is to be deleted and press the Enter key to delete it.
[McBIOS screen: "Delete Volume Set" -> "Select Volume To Delete" list with the "Delete Volume Set" Yes/No confirmation]
3.7.3.4 Modify Volume Set
Use this option to modify the volume set configuration. To modify volume set values from the RAID set system function, move the cursor bar to the "Modify Volume Set" item, then press the Enter key. The "Volume Set Functions" menu will show all RAID set items. Move the cursor bar to a RAID set number item, then press the Enter key to show all volume set items. Select the volume set from the list to be changed and press the Enter key to modify it.
As shown, volume information can be modified in this screen. Choose this option to display the properties of the selected volume set. Note that the user can only modify the capacity of the last volume set.
Note:
A power failure may damage the migration data. Please back up the RAID data before you start the migration function.
3.7.3.4.2 Volume Set Migration
Migration occurs when a volume set is migrating from one RAID level to another, when a volume set stripe size changes, or when a disk is added to a RAID set. The migration state is displayed in the volume state area of the "Volume Set Information" screen.
3.7.3.5 Check Volume Set
Use this option to verify the correctness of the redundant data in a volume set. For example, in a system with a dedicated parity disk drive, a volume set check entails computing the parity of the data disk drives and comparing those results to the contents of the dedicated parity disk drive. To check a volume set, move the cursor bar to the "Check Volume Set" item, then press the Enter key. The "Volume Set Functions" menu will show all RAID set number items. Move the cursor bar to a RAID set number item and then press the Enter key to show all volume set items. Select the volume set to be checked from the list and press the Enter key to select it. After completing the selection, the confirmation screen appears; press Yes to start the check.
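The comparison described above amounts to XOR-ing the data blocks and checking the result against the stored parity; the toy sketch below shows that check for a single stripe and is only an illustration of the idea, not the controller's verification code.

    # Toy dedicated-parity consistency check for one stripe: XOR all data
    # blocks together and compare the result with the parity block.
    def stripe_is_consistent(data_blocks, parity_block):
        computed = bytearray(len(parity_block))
        for block in data_blocks:
            for i, byte in enumerate(block):
                computed[i] ^= byte
        return bytes(computed) == parity_block

    d1, d2, d3 = b"\x01\x02", b"\x10\x20", b"\xff\x00"
    parity = bytes(a ^ b ^ c for a, b, c in zip(d1, d2, d3))
    print(stripe_is_consistent([d1, d2, d3], parity))            # True
    print(stripe_is_consistent([d1, d2, b"\x00\x00"], parity))   # False: mismatch found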
3.7.3.6 Stop Volume Set Check
Use this option to stop all of the “Check Volume Set” operations.
3.7.3.7 Display Volume Set Info.
To display volume set information, move the cursor bar to the desired volume set number and then press the Enter key. The "Volume Set Information" screen will be shown. You can only view the information of this volume set in this screen; it cannot be modified here.
[McBIOS screen: "Check Volume Set" -> "Select Volume To Check" list with the "Check The Volume?" Yes/No confirmation]
[Screen shot: McBIOS RAID manager “Display Volume Info.” volume selection and the “Volume Set Information” screen.]
3.7.4.2 Create Pass-Through Disk
A pass-through disk is not controlled by the 6Gb/s SAS RAID
controller firmware and thus cannot be a part of a volume set.
The disk is available directly to the operating system as an
individual disk. It is typically used on a system where the
operating system is on a disk not controlled by the 6Gb/s SAS
RAID controller firmware. The SCSI Channel/SCSI ID/SCSI LUN,
Cache Mode, and Tag Queuing must be specified to create a
pass-through disk.
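As a compact reminder of the values that must be supplied, the sketch below groups them into one record; the field names and the Python form are purely illustrative and are not an Areca API:

# Hypothetical sketch of the attributes that define a pass-through disk,
# mirroring the "Pass-Through Disk Attribute" dialog. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class PassThroughDisk:
    scsi_channel: int = 0            # SCSI Channel presented to the operating system
    scsi_id: int = 0                 # SCSI ID presented to the operating system
    scsi_lun: int = 0                # SCSI LUN presented to the operating system
    cache_mode: str = "Write Back"   # cache mode, e.g. "Write Back"
    tag_queuing: str = "Enabled"     # "Enabled" or "Disabled"

print(PassThroughDisk(scsi_channel=0, scsi_id=0, scsi_lun=0))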
3.7.4.3 Modify Pass-Through Disk
Use this option to modify “Pass-Through Disk Attributes”. To
select and modify a pass-through disk from the pool of
pass-through disks, move the cursor bar to the “Modify
Pass-Through Drive” option and then press the Enter key. The
“Physical Drive Function” menu will show all pass-through drive
number options. Move the cursor bar to the desired number and
then press the Enter key to show all pass-through disk
attributes. Select the parameter from the list to be changed and
then press the Enter key to modify it.
3.7.4.4 Delete Pass-Through Disk
To delete a pass-through drive from the pass-through drive pool,
move the cursor bar to the “Delete Pass-Through Drive” item,
then press the Enter key. The “Delete Pass-Through” confirmation
screen will appear; select Yes to delete it.
[Screen shot: McBIOS RAID manager “Create Pass-Through” drive selection, the “Pass-Through Disk Attribute” dialog (SCSI Channel, SCSI ID, SCSI LUN, Cache Mode, Tag Queuing) and the “Create Pass-Through” confirmation.]
[Screen shot: McBIOS RAID manager “Delete Pass-Through” drive selection with the “Delete Pass-Through” and “Are you Sure?” confirmation dialogs.]
3.7.4.5 Identify Selected Drive
To prevent removing the wrong drive, the fault LED indicator of
the selected disk will light to help physically locate the selected
disk when “Identify Selected Drive” is selected.
3.7.4.6 Identify Enclosure
To prevent removing the wrong enclosure, the fault LED
indicators of all disks in the selected Areca expander enclosure
will light to help physically locate the selected enclosure when
“Identify Enclosure” is selected. This function will also light the
enclosure LED indicator, if one exists.
[Screen shot: McBIOS RAID manager “Identify Selected Drive” drive selection with the “Please Check The Device’s LED” prompt.]
[Screen shot: McBIOS RAID manager “Identify Enclosure” with the “Select The Enclosure” list.]
3.7.5 Raid System Function
To set the “Raid System Function”, move the cursor bar to the
main menu and select the “Raid System Function” item, then
press the Enter key. The “Raid System Function” menu will show
multiple items. Move the cursor bar to an item, then press the
Enter key to select the desired function.
[Screen shot: McBIOS RAID manager main menu with the “Raid System Function” item selected.]
3.7.5.1 Mute The Alert Beeper
The “Mute The Alert Beeper” function item is used to control the
SAS RAID controller beeper. Select Yes and press the Enter key
in the dialog box to turn the beeper off temporarily. The beeper
will still activate on the next event.
3.7.5.2 Alert Beeper Setting
The “Alert Beeper Setting” function item is used to enable or
disable the SAS RAID controller alarm tone generator. Select
“Disabled” and press the Enter key in the dialog box to turn the
beeper off.
[Screen shot: McBIOS RAID manager “Raid System Function” menu with the “Mute Alert Beeper” confirmation dialog.]
[Screen shot: McBIOS RAID manager “Alert Beeper Setting” dialog (Enabled / Disabled).]
3.7.5.3 Change Password
The manufacturer’s default password is set to 0000. The
password option allows the user to set or clear the password
protection feature. Once the password has been set, the user
can monitor and configure the controller only by providing the
correct password. This feature is used to protect the internal
RAID system from unauthorized access. The controller will check
the password only when entering the main menu from the initial
screen. The system will automatically go back to the initial
screen if it does not receive any command within 5 minutes.
To set or change the password, move the cursor bar to the
“Change Password” item on the “Raid System Function” screen
and press the Enter key. The “Enter New Password” screen will
appear. Do not use spaces when you enter the password; if
spaces are used, it will lock out the user.
To disable the password, press only the Enter key in both the
“Enter New Password” and “Re-Enter New Password” columns.
The existing password will be cleared. No password checking will
occur when entering the main menu.
3.7.5.4 JBOD/RAID Function
JBOD is an acronym for “Just a Bunch Of Disks”. In JBOD mode,
a group of hard disks in a RAID box is not set up as any type of
RAID configuration. All drives are available to the operating
system as individual disks. JBOD does not provide data
redundancy. The user needs to delete the RAID set when
changing the option from the RAID to the JBOD function.
[Screen shot: McBIOS RAID manager “Change Password” with the “Enter New Password” dialog.]
3.7.5.8 Volume Data Read Ahead
The volume data read ahead parameter specifies the controller
firmware algorithms which process the read-ahead data blocks
from the disk. The “Data Read Ahead” parameter is set to
Normal by default. To modify the value, set it from the “Raid
System Function” menu using the “Volume Data Read Ahead”
option. The default “Normal” option satisfies the performance
requirements for a typical volume. The “Disabled” value implies
no read ahead. The most efficient value for the controller
depends on your application. The “Aggressive” value is optimal
for sequential access but degrades random access.
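As a rough illustration of this trade-off (a conceptual sketch with made-up prefetch sizes; the firmware’s actual algorithm and amounts are not documented here), a more aggressive setting fetches more extra blocks after each host read, which benefits sequential streams but wastes work on random access:

# Conceptual sketch of read-ahead behaviour. Prefetch sizes are illustrative only.
PREFETCH_BLOCKS = {
    "Disabled": 0,       # no read ahead
    "Conservative": 8,   # small prefetch
    "Normal": 32,        # default, suits a typical volume
    "Aggressive": 128,   # large prefetch, best for sequential access
}

def blocks_to_read(start_block, count, policy):
    """Return the range of blocks actually read for one host request."""
    return range(start_block, start_block + count + PREFETCH_BLOCKS[policy])

print(len(blocks_to_read(1000, 4, "Disabled")))    # 4: only the requested blocks
print(len(blocks_to_read(1000, 4, "Aggressive")))  # 132: 4 requested + 128 prefetched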
3.7.5.9 Hdd Queue Depth Setting
This parameter adjusts the queue depth capacity of NCQ
(SATA HDD) or Tagged Command Queuing (SAS HDD), which
transmits multiple commands to a single target without waiting
for the initial command to complete.
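A minimal sketch of what command queuing means for the command flow (illustration only, not controller firmware): with a queue depth of N, up to N commands can be outstanding at the drive at once instead of strictly one at a time, which is what the 1 to 32 settings in the firmware dialog control:

# Conceptual sketch of tagged/native command queuing. With queue depth N the
# initiator keeps up to N commands in flight instead of waiting for each one.
from collections import deque

def issue_commands(commands, queue_depth):
    """Simulate issuing commands with at most `queue_depth` outstanding at once."""
    outstanding = deque()
    max_in_flight = 0
    for cmd in commands:
        if len(outstanding) == queue_depth:
            outstanding.popleft()          # must wait for the oldest command to finish
        outstanding.append(cmd)            # otherwise issue without waiting
        max_in_flight = max(max_in_flight, len(outstanding))
    return max_in_flight

cmds = ["READ LBA %d" % lba for lba in range(0, 256, 8)]   # 32 read commands
print(issue_commands(cmds, queue_depth=1))    # 1: one command at a time
print(issue_commands(cmds, queue_depth=16))   # 16: up to 16 commands in flight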
[Screen shot: McBIOS RAID manager “Volume Data Read Ahead” dialog (Normal, Aggressive, Conservative, Disabled).]
3.7.5.10 Empty HDD Slot LED
The rmware has added the "Empty HDD Slot LED" option to
setup the fault LED light "ON "or "OFF" when there is no HDD
installed. When each slot has a power LED for the HDD installed
identify, user can set this option to "OFF". Choose option "ON",
the 6Gb/s SAS RAID controller will light the fault LED; if no
HDD installed.
[Screen shot: McBIOS RAID manager “Hdd Queue Depth Setting” dialog (1, 2, 4, 8, 16, 32).]
[Screen shot: McBIOS RAID manager “Empty HDD Slot LED” dialog (ON / OFF).]
3.7.5.11 Controller Fan Detection
The ARC-1882ix series incorporates one large passive heatsink
with an attached active cooling fan that keeps hot devices such
as the ROC and expander chip cool. In addition, newer systems
often already have enough airflow blowing over the controller. If
the system provides adequate cooling for the ROC and expander
chip, the user can remove the fan attached to the large passive
heatsink.
The “Controller Fan Detection” function is available in the
firmware for detecting the cooling fan function on the ROC when
the active cooling fan is used. When using a passive heatsink on
the controller, disable the “Controller Fan Detection” function
through this McBIOS RAID manager setting. The following screen
shot shows how to change the McBIOS RAID manager setting to
disable the warning beeper function. (This function is not
available in the web browser setting.)
[Screen shot: McBIOS RAID manager “Controller Fan Detection” dialog (Enabled / Disabled).]
3.7.5.12 Auto Activate Raid Set
When some disk drives are removed while the system is powered
off or during the boot-up stage, the RAID set state will change to
“Incomplete State”. If a user wants the array to automatically
continue working when the 6Gb/s SAS RAID controller is
powered on, the user can set the “Auto Activate Raid Set” option
to “Enabled”. The RAID set state will then change to “Degraded
Mode” when it powers on.
[Screen shot: McBIOS RAID manager “Auto Activate Raid When Power on” dialog (Enabled / Disabled).]
[Screen shot: McBIOS RAID manager “Disk Write Cache Mode” dialog (Auto, Disabled, Enabled).]
3.7.5.13 Disk Write Cache Mode
The user can set the “Disk Write Cache Mode” to Auto, Enabled,
or Disabled. “Enabled” increases speed; “Disabled” increases
reliability.
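The trade-off comes from when a write is acknowledged (a conceptual sketch, not drive firmware): with the drive write cache enabled, the drive acknowledges as soon as the data reaches its volatile cache, so a power failure can lose acknowledged writes; with it disabled, the drive acknowledges only after the data is on the media:

# Conceptual sketch of write-back ("Enabled") versus write-through ("Disabled").
class Disk:
    def __init__(self, write_cache_enabled):
        self.write_cache_enabled = write_cache_enabled
        self.cache = []   # volatile: contents are lost on power failure
        self.media = []   # persistent storage

    def write(self, block):
        if self.write_cache_enabled:
            self.cache.append(block)   # acknowledged immediately (fast)
        else:
            self.media.append(block)   # acknowledged after the media write (reliable)

    def power_failure(self):
        self.cache.clear()             # cached but unflushed writes are lost

d = Disk(write_cache_enabled=True)
d.write("block A")
d.power_failure()
print(d.media)   # [] -> the acknowledged write never reached the media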
3.7.5.14 Capacity Truncation
Areca RAID controllers use drive truncation so that drives from
different vendors are more likely to be usable as spares for one
another. Drive truncation slightly decreases the usable capacity
of a drive that is used in redundant units. The controller
provides three truncation modes in the system configuration:
Multiples Of 10G, Multiples Of 1G, and Disabled.
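For example (a sketch of the arithmetic implied by the mode names; the exact firmware rounding is an assumption), a drive reported as 500.1 GB is treated as 500 GB under either truncation mode, so a 500.0 GB drive from another vendor can still stand in as a spare:

# Sketch of the capacity truncation arithmetic (assumed behaviour, not firmware code).
def truncate_capacity_gb(capacity_gb, mode):
    if mode == "Multiples Of 10G":
        return (capacity_gb // 10) * 10   # round down to a multiple of 10 GB
    if mode == "Multiples Of 1G":
        return capacity_gb // 1           # round down to a whole GB
    return capacity_gb                    # "Disabled": use the full reported capacity

for mode in ("Multiples Of 10G", "Multiples Of 1G", "Disabled"):
    print(mode, truncate_capacity_gb(500.1, mode))
# Multiples Of 10G 500.0
# Multiples Of 1G 500.0
# Disabled 500.1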