<!DOCTYPE html>
<html lang="en"><head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8"><meta charset="utf-8"><meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1"><link rel="alternate" type="application/rss+xml" title="Calomel.org RSS Feed" href="https://calomel.org/calomel_rss.xml"><link rel="icon" sizes="16x16" type="image/png" href="data:image/png;base64,AAABAAEAEBAAAAEAIABoBAAAFgAAACgAAAAQAAAAIAAAAAEAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAD///8A////AL+AEBDJgBfvyYEY/8mBGP/JgRj/yYAWUMmAF+/JgRj/yIAWn////wD///8A////AP///wD///8A////AP///wDHgBiAyYEY/8mBGP/JgRj/x4AYYP///wDJgRj/yYEY/8mBGP/IgBaf////AP///wD///8A////AP///wD///8AyIAX38mBGP/JgRj/x4AYYP///wD///8AyIAX38mBGP/JgRj/yYEY/8iAF3D///8A////AP///wD///8AyYAWUMmBGP/JgRj/yYAXz////wD///8A////AP///wDHgBhAx4AYgMeAGIDFgBUw////AP///wD///8A////AMeAGIDJgRj/yYEY/8mBGP/IgBaf////AP///wD///8A////AP///wD///8A////AMeAGCDIgBe/x4AYYP///wC/gBAQyIAXv8mBGP/JgRj/yYEY/8mBGP/JgRj/yIAX3////wD///8A////AP///wC/gBAQyYAX78mBGP/JgBfvx4AYgP///wDHgBggyIAXcMiAF9/JgRj/yYEY/8mAF+////8A////AP///wD///8A////AMWAFTDJgBfvyYAXz7+AEBDIgBePx4AYIP///wD///8Ax4AYQMeAGIDFgBUw////AP///wD///8A////AP///wD///8A////AP///wDHgBhgyYEY/8eAGEC/gBAQxYAVMP///wD///8A////AP///wD///8A////AP///wD///8A////AP///wD///8AyIAX38mAF++/gBAQyIAWn8iAF7////8AyIAXv8iAF6////8A////AP///wD///8A////AP///wD///8A////AMmBGP/IgBdw////AMmAF+/IgBe/////AMmBGP/IgBe/////AP///wD///8A////AP///wD///8A////AMmAF8/IgBeP////AP///wDJgRj/x4AYgP///wDJgBfvyIAXj////wD///8A////AP///wD///8A////AP///wDJgRj/x4AYYP///wD///8Ax4AYQL+AEBD///8AxYAVMMeAGCD///8A////AP///wD///8A////AP///wD///8AyIAXcP///wD///8AyIAXr8mAF+////8A////AMmBGP/IgBe/////AP///wD///8A////AP///wD///8A////AP///wD///8A////AMmAF+/JgBfP////AP///wDJgBfvx4AYgP///wD///8A////AP///wD///8A////AP///wD///8A////AP///wDIgBdwxYAVMP///wD///8A////AP///wD///8A////AP///wD///8A////AP///wD///8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=="><title>MegaCLI Scripts and Commands LSI @ Calomel.org</title><style type="text/css"> body 
{color:#FFFFFF;display:block;font-family: "Verdana", "sans-serif";margin:0 auto;overflow:auto;overflow-x:hidden;padding:0px;background:#FFFFFF url("/calomel_image.jpg");background-size:100% auto;background-attachment:fixed;background-position:top center;background-repeat:no-repeat;} img {border:0;height:auto;max-width:100%;outline:none;padding-left:5em;padding-right:5em;} hr { border:0; border-top:1px solid #E0E0E0; width:70%; } h2 { color:#707070; padding:3em 0em 0em 0em; } h3 { color:#606060; padding:3em 0em 0em 0em; } p { padding:1em 0em 0.5em 0em; } ul li { padding:0em 0em 0.5em 0em;} .boxiconbottom {padding:10% 0% 15% 0%;text-align:center;background:#FFFFFF;} .boxcenter {background-color:#FFF;color:#303030;font-size: 100%;line-height:1.5;margin:45% auto 0% auto;padding:0% 10% 5% 10%;word-wrap: break-word;} .boxcenter a:link { color:#0000FF; text-decoration:none; } .boxcenter a:hover { color:#0000FF; text-decoration:underline; } .code {background-color:#E8E8E8;border:1px inset;color:#4d3e32;font-family: "monospace";font-size: 100%;line-height:1.5;margin:0.5em auto 0.5em auto;max-height:700px;overflow:auto;word-wrap: normal;padding:1em;} .note {background: #E8E8E8;border-radius: 1em 1em 1em 1em;box-shadow: 0 10px 10px rgba(0,0,0,0.3), -10px 10px 10px rgba(0,0,0,0.2);color: #000000;font-size: 100%;line-height:1.5;margin: 10em auto;overflow: hidden;padding: 1em 1.5em;position: relative;width: 70%;word-wrap: break-word;} .title_main {text-align:center;color:#707070; padding:5em 0em 0em 0em} .title_sub {text-align:center;color:#707070; padding:0em 0em 4em 0em;} .alignleft {float: left;color:#B8B8B8;padding-right:2em;} .alignright {float: right;color:#B8B8B8} .boxcenter a.alignleft:link {color:#B8B8B8; text-decoration:none;} .boxcenter a.alignleft:hover {color:#0000FF; text-decoration:underline;} .boxcenter a.alignright:link {color:#B8B8B8; text-decoration:none;} .boxcenter a.alignright:hover {color:#B8B8B8; text-decoration:none;} </style></head><body><div 
class="boxcenter"><a class="alignleft" href="https://calomel.org/">home</a><a class="alignleft" href="https://calomel.org/calomel_rss.xml">rss</a><a class="alignleft" href="https://encrypted.google.com/search?&q=site%3Acalomel.org&btnG=Search">search</a><a class="alignright">January 01, 2017</a><h1 class="title_main">MegaCLI Scripts and Commands</h1><hr><h3 class="title_sub">making LSI raid controllers a little easier to work with</h3><p>MegaCLI
is the command line interface (CLI) binary used to communicate with the full LSI family of raid controllers found in Supermicro, DELL (PERC), ESXi and Intel servers. The program is a single static binary driven entirely from the command line. We are not fans of graphical interfaces (GUI) and appreciate the control a command line program gives over a GUI solution. With some simple shell scripting we can check the health of the RAID, email ourselves about problems and work with failed drives.</p><p>Many MegaCLI command pages simply rehash the same commands over and over, and we wanted to offer something more. For our examples we are using Ubuntu Linux and FreeBSD with the MegaCli64 binary. All of these scripts and commands work with both the 32-bit and 64-bit binaries.</p><h3>Installing the MegaCLI binary</h3><p>In
order to communicate with the LSI card you will need the MegaCLI or MegaCLI64 (64-bit) program. The install should be quite easy, but LSI makes us jump through a few hoops. This is what we found: </p><ul><li> Go to the LSI Downloads page: <a href="http://www.lsi.com/support/pages/Download-Search.aspx">LSI Downloads</a></li><li> Search by the keyword "megacli"</li><li> Click on "Management Software and Tools"</li><li> Download the MegaCLI zip file. The same file covers DOS, Windows, Linux and FreeBSD.</li><li> Unzip the file</li><li> In the Linux directory there is an RPM. If you are using Red Hat you can install it directly. For Ubuntu go to the next step.</li><li> For Ubuntu run "rpm2cpio MegaCli-*.rpm | cpio -idmv" to expand the directory structure. You may need to "apt-get install rpm2cpio" first.</li><li> For FreeBSD unzip the file in the FreeBSD directory.</li></ul><p>On
our Ubuntu Linux 64-bit and FreeBSD 64-bit servers we simply copied MegaCli64 (64-bit) to /usr/local/sbin/ . You can put the binary anywhere you want, but we chose /usr/local/sbin/ because it is in root's path. Make sure to secure the binary: make root the owner and chmod it to 700 (chown root /usr/local/sbin/MegaCli64; chmod 700 /usr/local/sbin/MegaCli64). The install is now done. We would like to see LSI publish an Ubuntu PPA or a FreeBSD ports entry sometime in the future, but this setup was not too bad.</p><h3>The lsi.sh MegaCLI interface script</h3><p>Once
you have MegaCLI installed, the following script helps in getting information from the raid card. The shell script does nothing more than execute the commands you would normally type on the CLI. The script can show the status of the raid and drives. You can identify any drive slot by blinking the light on the chassis. The script can help you spot drives which are starting to error out or slow down the raid so you can replace them early. We have also included a "setdefaults" method to set up a new raid card to the specs we use for our 400+ raids. Finally, the "checkNemail" method checks the raid status and mails you a list of drives, marking the one reporting the problem.</p><p>You are welcome to copy and paste the following script. We call the script "lsi.sh", but you can use any name you wish. Just make sure to set the full path to the MegaCli binary in the script and make the script executable. We tried to comment every method, so take a look through the script before using it.</p><p></p><pre class="code">#!/bin/bash
#
# Calomel.org
# https://calomel.org/megacli_lsi_commands.html
# LSI MegaRaid CLI
# lsi.sh @ Version 0.05
#
# description: MegaCLI script to configure and monitor LSI raid cards.

# Full path to the MegaRaid CLI binary
MegaCli="/usr/local/sbin/MegaCli64"

# The identifying number of the enclosure. Default for our systems is "8". Use
# "MegaCli64 -PDlist -a0 | grep "Enclosure Device"" to see what your number
# is and set this variable.
ENCLOSURE="8"

if [ $# -eq 0 ]
then
  echo ""
  echo " OBPG .:. lsi.sh command [slot]"
  echo "-----------------------------------------------------"
  echo "status       = Status of Virtual drives (volumes)"
  echo "drives       = Status of hard drives"
  echo "ident \$slot  = Blink light on drive (need slot number)"
  echo "good \$slot   = Simply makes the slot \"Unconfigured(good)\" (need slot number)"
  echo "replace \$slot = Replace \"Unconfigured(bad)\" drive (need slot number)"
  echo "progress     = Status of drive rebuild"
  echo "errors       = Show drive errors which are non-zero"
  echo "bat          = Battery health and capacity"
  echo "batrelearn   = Force BBU re-learn cycle"
  echo "logs         = Print card logs"
  echo "checkNemail  = Check volume(s) and send email on raid errors"
  echo "allinfo      = Print out all settings and information about the card"
  echo "settime      = Set the raid card's time to the current system time"
  echo "setdefaults  = Set preferred default settings for new raid setup"
  echo ""
  exit
fi

# General status of all RAID virtual disks or volumes and if PATROL disk check
# is running.
if [ "$1" = "status" ]
then
  $MegaCli -LDInfo -Lall -aALL -NoLog
  echo "###############################################"
  $MegaCli -AdpPR -Info -aALL -NoLog
  echo "###############################################"
  $MegaCli -LDCC -ShowProg -LALL -aALL -NoLog
  exit
fi

# Shows the state of all drives and if they are online, unconfigured or missing.
if [ "$1" = "drives" ]
then
  $MegaCli -PDlist -aALL -NoLog | egrep 'Slot|state' | awk '/Slot/{if (x)print x;x="";}{x=(!x)?$0:x" -"$0;}END{print x;}' | sed 's/Firmware state://g'
  exit
fi

# Use to blink the light on the slot in question. Hit enter again to turn the blinking light off.
if [ "$1" = "ident" ]
then
  $MegaCli -PdLocate -start -physdrv[$ENCLOSURE:$2] -a0 -NoLog
  logger "`hostname` - identifying enclosure $ENCLOSURE, drive $2 "
  read -p "Press [Enter] key to turn off light..."
  $MegaCli -PdLocate -stop -physdrv[$ENCLOSURE:$2] -a0 -NoLog
  exit
fi

# When a new drive is inserted it might have old RAID headers on it. This
# method simply removes old RAID configs from the drive in the slot and makes
# the drive "good." Basically, Unconfigured(bad) to Unconfigured(good). We use
# this method on our FreeBSD ZFS machines before the drive is added back into
# the zfs pool.
if [ "$1" = "good" ]
then
  # set Unconfigured(bad) to Unconfigured(good)
  $MegaCli -PDMakeGood -PhysDrv[$ENCLOSURE:$2] -a0 -NoLog
  # clear 'Foreign' flag or invalid raid header on replacement drive
  $MegaCli -CfgForeign -Clear -aALL -NoLog
  exit
fi

# Use to diagnose bad drives. When no errors are shown only the slot numbers
# will print out. If a drive has errors you will see the error counts under
# the slot number. At this point you can decide to replace the flaky drive.
# Bad drives might not fail right away and will slow down your raid with
# read/write retries or corrupt data.
if [ "$1" = "errors" ]
then
  echo "Slot Number: 0"; $MegaCli -PDlist -aALL -NoLog | egrep -i 'error|fail|slot' | egrep -v ' 0'
  exit
fi

# Status of the battery and the amount of charge. Without a working Battery
# Backup Unit (BBU) most of the LSI read/write caching will be disabled
# automatically. You want caching for speed so make sure the battery is ok.
if [ "$1" = "bat" ]
then
  $MegaCli -AdpBbuCmd -aAll -NoLog
  exit
fi

# Force a Battery Backup Unit (BBU) re-learn cycle. This will discharge the
# lithium BBU unit and recharge it. This check might take a few hours and you
# will want to always run this in off hours. LSI suggests a battery relearn
# monthly or so. We actually run it every three(3) months by way of a cron job.
# Understand if your "Current Cache Policy" is set to "No Write Cache if Bad
# BBU" then write-cache will be disabled during this check. This means writes
# to the raid will be VERY slow at about 1/10th normal speed. NOTE: if the
# battery is new (new bats should charge for a few hours before they register)
# or if the BBU comes up and says it has no charge, try powering off the machine
# and restarting it. This will force the LSI card to re-evaluate the BBU. Silly
# but it works.
if [ "$1" = "batrelearn" ]
then
  $MegaCli -AdpBbuCmd -BbuLearn -aALL -NoLog
  exit
fi

# Use to replace a drive. You need the slot number and may want to use the
# "drives" method to show which drive in a slot is "Unconfigured(bad)". Once
# the new drive is in the slot and spun up this method will bring the drive
# online, clear any foreign raid headers from the replacement drive and set the
# drive as a hot spare. We will also tell the card to start rebuilding if it
# does not start automatically. The raid should start rebuilding right away
# either way. NOTE: if you pass a slot number which is already part of the raid
# by mistake the LSI raid card is smart enough to just error out and _NOT_
# destroy the raid drive, thankfully.
if [ "$1" = "replace" ]
then
  logger "`hostname` - REPLACE enclosure $ENCLOSURE, drive $2 "
  # set Unconfigured(bad) to Unconfigured(good)
  $MegaCli -PDMakeGood -PhysDrv[$ENCLOSURE:$2] -a0 -NoLog
  # clear 'Foreign' flag or invalid raid header on replacement drive
  $MegaCli -CfgForeign -Clear -aALL -NoLog
  # set drive as hot spare
  $MegaCli -PDHSP -Set -PhysDrv [$ENCLOSURE:$2] -a0 -NoLog
  # show rebuild progress on replacement drive just to make sure it starts
  $MegaCli -PDRbld -ShowProg -PhysDrv [$ENCLOSURE:$2] -a0 -NoLog
  exit
fi

# Print all the logs from the LSI raid card. You can grep on the output.
if [ "$1" = "logs" ]
then
  $MegaCli -FwTermLog -Dsply -aALL -NoLog
  exit
fi

# Use to query the RAID card and find the drive which is rebuilding. The script
# will then query the rebuilding drive to see what percentage it is rebuilt and
# how much time it has taken so far. You can then estimate the completion time.
if [ "$1" = "progress" ]
then
  DRIVE=`$MegaCli -PDlist -aALL -NoLog | egrep 'Slot|state' | awk '/Slot/{if (x)print x;x="";}{x=(!x)?$0:x" -"$0;}END{print x;}' | sed 's/Firmware state://g' | egrep build | awk '{print $3}'`
  $MegaCli -PDRbld -ShowProg -PhysDrv [$ENCLOSURE:$DRIVE] -a0 -NoLog
  exit
fi

# Use to check the status of the raid. If the raid is degraded or faulty the
# script will send email to the address in the $EMAIL variable. We normally add
# this method to a cron job to be run every few hours so we are notified of any
# issues.
if [ "$1" = "checkNemail" ]
then
  EMAIL="raidadmin@localhost"

  # Check if raid is in good condition
  STATUS=`$MegaCli -LDInfo -Lall -aALL -NoLog | egrep -i 'fail|degrad|error'`

  # On bad raid status send email with basic drive information
  if [ "$STATUS" ]; then
    $MegaCli -PDlist -aALL -NoLog | egrep 'Slot|state' | awk '/Slot/{if (x)print x;x="";}{x=(!x)?$0:x" -"$0;}END{print x;}' | sed 's/Firmware state://g' | mail -s `hostname`' - RAID Notification' $EMAIL
  fi
  exit
fi

# Use to print all information about the LSI raid card. Check default options,
# firmware version (FW Package Build), battery back-up unit presence, installed
# cache memory and the capabilities of the adapter. Pipe to grep to find the
# term you need.
if [ "$1" = "allinfo" ]
then
  $MegaCli -AdpAllInfo -aAll -NoLog
  exit
fi

# Update the LSI card's time with the current operating system time. You may
# want to setup a cron job to call this method once a day or whenever you
# think the raid card's time might drift too much.
if [ "$1" = "settime" ]
then
  $MegaCli -AdpGetTime -aALL -NoLog
  $MegaCli -AdpSetTime `date +%Y%m%d` `date +%H:%M:%S` -aALL -NoLog
  $MegaCli -AdpGetTime -aALL -NoLog
  exit
fi

# These are the defaults we like to use on the hundreds of raids we manage. You
# will want to go through each option here and make sure you want to use them
# too. These options are for speed optimization, build rate tweaks and PATROL
# options. When setting up a new machine we simply execute the "setdefaults"
# method and the raid is configured. You can use this on live raids too.
if [ "$1" = "setdefaults" ]
then
  # Read Cache enabled specifies that all reads are buffered in cache memory.
  $MegaCli -LDSetProp -Cached -LAll -aAll -NoLog
  # Adaptive Read-Ahead if the controller receives several requests to sequential sectors
  $MegaCli -LDSetProp ADRA -LALL -aALL -NoLog
  # Hard Disk cache policy enabled allowing the drive to use internal caching too
  $MegaCli -LDSetProp EnDskCache -LAll -aAll -NoLog
  # Write-Back cache enabled
  $MegaCli -LDSetProp WB -LALL -aALL -NoLog
  # Continue booting with data stuck in cache. Set Boot with Pinned Cache Enabled.
  $MegaCli -AdpSetProp -BootWithPinnedCache -1 -aALL -NoLog
  # PATROL run every 672 hours or monthly (RAID6 77TB @60% rebuild takes 21 hours)
  $MegaCli -AdpPR -SetDelay 672 -aALL -NoLog
  # Check Consistency every 672 hours or monthly
  $MegaCli -AdpCcSched -SetDelay 672 -aALL -NoLog
  # Enable autobuild when a new Unconfigured(good) drive is inserted or set to hot spare
  $MegaCli -AdpAutoRbld -Enbl -a0 -NoLog
  # RAID rebuild rate to 60% (build quick before another failure)
  $MegaCli -AdpSetProp \{RebuildRate -60\} -aALL -NoLog
  # RAID check consistency rate to 60% (fast parity checks)
  $MegaCli -AdpSetProp \{CCRate -60\} -aALL -NoLog
  # Enable Native Command Queueing (NCQ) on all drives
  $MegaCli -AdpSetProp NCQEnbl -aAll -NoLog
  # Sound alarm disabled (server room is too loud anyways)
  $MegaCli -AdpSetProp AlarmDsbl -aALL -NoLog
  # Use write-back cache mode even if BBU is bad. Make sure your machine is on UPS too.
  $MegaCli -LDSetProp CachedBadBBU -LAll -aAll -NoLog
  # Disable auto learn BBU check which can severely affect raid speeds
  OUTBBU=$(mktemp /tmp/output.XXXXXXXXXX)
  echo "autoLearnMode=1" > $OUTBBU
  $MegaCli -AdpBbuCmd -SetBbuProperties -f $OUTBBU -a0 -NoLog
  rm -f $OUTBBU
  exit
fi

### EOF ###
</pre><p></p><h3>How do I use the lsi.sh script ?</h3><p>First, execute the script without any arguments. The script will print out a "help" statement showing all of the available commands and a very short description of each function. Inside the script we have also put in detailed comments.</p><p>For example, let's look at the status of the RAID volumes, or what LSI calls virtual drives. Run the script with the "status" argument. This simply prints the details of the raid volumes and whether PATROL or Check Consistency is running. In our example we have two(2) RAID6 volumes of 18.1TB each. The first array is "Partially Degraded" and the second is "Optimal", which means it is healthy.</p><p></p><pre class="code">calomel@lsi:~# ./lsi.sh status

Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-6, Secondary-0, RAID Level Qualifier-3
Size                : 18.188 TB
Sector Size         : 512
Parity Size         : 3.637 TB
State               : Partially Degraded
Strip Size          : 256 KB
Number Of Drives    : 12
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy   : Enabled
Encryption Type     : None
PI type: No PI

Is VD Cached: No


Virtual Drive: 1 (Target Id: 1)
Name                :
RAID Level          : Primary-6, Secondary-0, RAID Level Qualifier-3
Size                : 18.188 TB
Sector Size         : 512
Parity Size         : 3.637 TB
State               : Optimal
Strip Size          : 256 KB
Number Of Drives    : 12
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy   : Enabled
Encryption Type     : None
PI type: No PI

Is VD Cached: No

###############################################

Adapter 0: Patrol Read Information:

Patrol Read Mode: Auto
Patrol Read Execution Delay: 672 hours
Number of iterations completed: 2
Current State: Stopped
Patrol Read on SSD Devices: Disabled

Exit Code: 0x00
###############################################

Check Consistency on VD #0 is not in progress.
Check Consistency on VD #1 is not in progress.

Exit Code: 0x00
</pre><p></p><h3>Why is the first volume degraded ?</h3><p>The first virtual disk lost a drive, which has already been replaced and is now rebuilding. We can look at the status of all the drives using the lsi.sh script with the "drives" argument. You can see slot number 9 holds the drive which is rebuilding.</p><p></p><pre class="code">calomel@lsi:~# ./lsi.sh drives

Slot Number: 0 - Online, Spun Up
Slot Number: 1 - Online, Spun Up
Slot Number: 2 - Online, Spun Up
Slot Number: 3 - Online, Spun Up
Slot Number: 4 - Online, Spun Up
Slot Number: 5 - Online, Spun Up
Slot Number: 6 - Online, Spun Up
Slot Number: 7 - Online, Spun Up
Slot Number: 8 - Online, Spun Up
Slot Number: 9 - Rebuild
Slot Number: 10 - Online, Spun Up
Slot Number: 11 - Online, Spun Up
Slot Number: 12 - Online, Spun Up
Slot Number: 13 - Online, Spun Up
Slot Number: 14 - Online, Spun Up
Slot Number: 15 - Online, Spun Up
Slot Number: 16 - Online, Spun Up
Slot Number: 17 - Online, Spun Up
Slot Number: 18 - Online, Spun Up
Slot Number: 19 - Online, Spun Up
Slot Number: 20 - Online, Spun Up
Slot Number: 21 - Online, Spun Up
Slot Number: 22 - Online, Spun Up
Slot Number: 23 - Online, Spun Up

</pre><p></p><h3>When will the rebuild be finished ?</h3><p>The card will only tell us how far the rebuild has progressed and how long the process has been running. Using the "progress" argument we see the rebuild is 32% done and has taken 169 minutes so far. Since the rebuild is close enough to 33% done, we simply multiply the elapsed time (169 minutes) by 3 to estimate a total of 507 minutes, or about 8.45 hours, assuming the load on the raid stays the same through to completion.</p><p></p><pre class="code">calomel@lsi:~# ./lsi.sh progress

Rebuild Progress on Device at Enclosure 8, Slot 9 Completed 32% in 169 Minutes.
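The multiply-by-3 shortcut above can also be done exactly with plain shell arithmetic (this is our own back-of-the-envelope sketch, not a MegaCLI feature): divide elapsed minutes by the fraction complete.

```shell
# Estimate total and remaining rebuild time from the progress output above.
elapsed=169   # minutes elapsed so far
percent=32    # percent complete
total=$(( elapsed * 100 / percent ))
echo "estimated total minutes: $total"
echo "estimated minutes remaining: $(( total - elapsed ))"
```

For the 32%/169-minute example this gives 528 total minutes, close to the 507 from the rule of thumb.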

</pre><p></p><p></p><div class="note"> Want more speed out of FreeBSD ? Check out our <a href="https://calomel.org/freebsd_network_tuning.html">FreeBSD Network Tuning</a> guide where we enhance 1 gigabit and 10 gigabit network configurations. </div><p></p><h3>How does the lsi.sh script check errors and send out email ?</h3><p>The
"checkNemail" argument checks the status of the volumes, also called virtual drives, and sends email if a string like "fail", "degrad" or "error" is found. Make sure to set the $EMAIL variable in the script to your email address. The output of the email shows slot 9 rebuilding. The first virtual drive in this example contains slots 0 through 11. If the physical drive were bad, on the other hand, we would see slot 9 as Unconfigured(bad), Unconfigured(good) or even Missing. </p><p></p><pre class="code">Date: Wed, 20 Feb 2033 17:01:11 -0500
From: root@localhost
To: raidadmin@localhost
Subject: calomel.org - RAID Notification

Slot Number: 0 - Online, Spun Up
Slot Number: 1 - Online, Spun Up
Slot Number: 2 - Online, Spun Up
Slot Number: 3 - Online, Spun Up
Slot Number: 4 - Online, Spun Up
Slot Number: 5 - Online, Spun Up
Slot Number: 6 - Online, Spun Up
Slot Number: 7 - Online, Spun Up
Slot Number: 8 - Online, Spun Up
Slot Number: 9 - Rebuild
Slot Number: 10 - Online, Spun Up
Slot Number: 11 - Online, Spun Up
Slot Number: 12 - Online, Spun Up
Slot Number: 13 - Online, Spun Up
Slot Number: 14 - Online, Spun Up
Slot Number: 15 - Online, Spun Up
Slot Number: 16 - Online, Spun Up
Slot Number: 17 - Online, Spun Up
Slot Number: 18 - Online, Spun Up
Slot Number: 19 - Online, Spun Up
Slot Number: 20 - Online, Spun Up
Slot Number: 21 - Online, Spun Up
Slot Number: 22 - Online, Spun Up
Slot Number: 23 - Online, Spun Up

</pre><p></p><p>We prefer to run the script with "checkNemail" from a cron job. This way we get notified whenever the raid has an issue. The following cron job runs the script every two(2) hours. As long as the raid is degraded you will keep getting email. We see this as a reminder to check on the raid if it has not finished rebuilding by morning.</p><p></p><pre class="code">SHELL=/bin/bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin
#
#minute (0-59)
#| hour (0-23)
#| | day of the month (1-31)
#| | | month of the year (1-12 or Jan-Dec)
#| | | | day of the week (0-6 with 0=Sun or Sun-Sat)
#| | | | | commands
#| | | | | |
# raid status, check and report
00 */2 * * * /root/lsi.sh checkNemail
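If you also use the script's "settime" method, a second cron entry keeps the card's clock in sync; the 4:30am run time below is just an example we made up.

```shell
# raid card clock sync once a day (pick any quiet time)
30 4 * * * /root/lsi.sh settime
```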
</pre><p></p><h2>Questions?</h2><p><b>How do I setup two(2) 12 drive RAID6 arrays in a 24 slot chassis ?</b></p><p>Using two commands we can configure drives 0 through 11 as the first RAID6 array, then do the same for the next virtual drive with drives 12 through 23. The directive "-r6" stands for RAID6, a raid with two parity drives and a bit safer than RAID5. Using 2TB drives this makes two(2) 18.1 terabyte raid volumes when formatted with XFS. Initialization takes around 19 hours.</p><p></p><pre class="code">MegaCli64 -CfgLdAdd -r6'[8:0,8:1,8:2,8:3,8:4,8:5,8:6,8:7,8:8,8:9,8:10,8:11]' -a0 -NoLog
MegaCli64 -CfgLdAdd -r6'[8:12,8:13,8:14,8:15,8:16,8:17,8:18,8:19,8:20,8:21,8:22,8:23]' -a0 -NoLog
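To watch the initialization on the new volumes, MegaCLI can report progress; as best we recall the flag is "-LDInit -ShowProg", but verify it against the help output of your MegaCli version.

```shell
# Show initialization progress for all virtual drives on adapter 0
MegaCli64 -LDInit -ShowProg -LALL -aALL -NoLog
```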
</pre><p></p><p></p><p><b>How do I setup raid 1+0 ?</b></p><p>RAID 10 is a stripe across mirrored arrays and requires a minimum of 4 drives. We will set up slots 0 and 1 as one mirror (Array0) and slots 2 and 3 as the second mirror (Array1), then (RAID0) stripe across both sets of raid1 mirrors. In most cases RAID 10 provides better throughput and latency than all other RAID levels except RAID 0 (which wins in throughput, but loses in data safety). RAID10 is the preferable RAID level for I/O-intensive applications such as database, email and web servers as it is fast and provides data integrity.</p><p></p><pre class="code">MegaCli64 -CfgSpanAdd -r10 -Array0[8:0,8:1] -Array1[8:2,8:3] -a0 -NoLog
</pre><p></p><p></p><p><b>What do the "Cache Policy" values mean ?</b></p><p>Cache policies control how the raid card uses its on-board RAM to collect data before writing out to disk, or to read data before the system asks for it. Write cache is used when we have a lot of data to write and it is faster to write data sequentially to disk instead of writing small chunks. Read cache is used when the system has asked for some data and the raid card keeps the data in cache in case the system asks for the same data again. It is always faster to read from and write to cache than to access spinning disks. Understand that you should only use caching if the system has good UPS power. If the system loses power and does not flush the cache, it is possible to lose data. No one wants that. Let's look at each cache policy the LSI raid cards use.</p><ul><li><b>WriteBack </b>
uses the card's cache to collect enough data to make a series of long sequential writes out to disk. This is the fastest write method.</li><li><b>WriteThrough </b> tells the card to write all data directly to disk without caching. This method is quite slow, about 1/10 the speed of WriteBack, but is safer as no in-cache data can be lost when the machine's power fails.</li><li><b>ReadAdaptive </b> uses an algorithm which, when the OS asks for a run of sequential data blocks, reads a few more sequential blocks ahead because the OS _might_ ask for those too. This method can lead to good speed increases.</li><li><b>ReadAheadNone </b> tells the raid card to only read the data off the raid disk if it was actually asked for. No more, no less.</li><li><b>Cached </b> allows the general use of the card's cache for any data which is read or written. Very efficient if the same data is accessed over and over again.</li><li><b>Direct </b> is straight access to the disk without ever storing data in the cache. This can be slow as every I/O has to touch the disk platters.</li><li><b>Write Cache OK if Bad BBU </b> tells the card to use write caching even if the Battery Backup Unit (BBU) is bad, disabled or missing. This is a good setting if your raid card's BBU charger is bad, if you do not want to or cannot replace the BBU, or if you do not want WriteThrough enabled during a BBU relearn test.</li><li><b>No Write Cache if Bad BBU </b> if the BBU is not available for any reason then disable WriteBack and turn on WriteThrough. This option is safer for your data, but the raid card will switch to WriteThrough during a battery relearn cycle.</li><li><b>Disk Cache Policy: Enabled</b> uses the hard drive's own cache. For example, as data is written out to the drives this option lets the drives themselves cache data internally before committing it to their platters.</li><li><b>Disk Cache Policy: Disabled</b> does not allow the drive to use any of its own internal cache.</li></ul><p><b>So how fast is the raid volume with caching enabled and disabled ?</b>
A simple test using hdparm shows disk access speeds. Caching allows this test to run two(2) to three(3) times faster on the exact same hardware. For our machines we prefer to use caching. </p><pre class="code">## Enable caching on the LSI and disks

$ hdparm -tT /dev/sdb1
/dev/sdb1:
 Timing cached reads:   18836 MB in  2.00 seconds = 9428.07 MB/sec
 Timing buffered disk reads: 1403 MB in  3.00 seconds = 467.67 MB/sec


## Disable all caching on the LSI card and disks

$ hdparm -tT /dev/sdb1
/dev/sdb1:
 Timing cached reads:   6743 MB in  2.00 seconds = 3371.76 MB/sec
 Timing buffered disk reads: 587 MB in  3.01 seconds = 198.37 MB/sec
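To confirm which cache settings were active during each hdparm run, the settings can be read back from the card. We believe the "-LDGetProp" flags below are correct, but verify them against your MegaCli version's help output.

```shell
## Read back the current cache settings per virtual drive
MegaCli64 -LDGetProp -Cached -LAll -aAll -NoLog    # Cached vs Direct I/O
MegaCli64 -LDGetProp -DskCache -LAll -aAll -NoLog  # physical disk cache policy
```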
|
|
</pre><p></p><p></p><p><b>How about a FreeBSD ZFS raid-z2 array using the LSI raid card ?</b></p><p>ZFS
|
|
on FreeBSD is one of the absolute best file systems we have ever used.
|
|
It is very fast, stable and joy to use. Lets look at setting up a
|
|
raid-z2 ZFS pool using 12 separate hard drives all connected through an
|
|
LSI MegaRAID controller in a JBOD (Just a Bunch Of Disks) like
|
|
configuration.</p><p>The LSI MegaRAID native JBOD mode does not work
very well and we do not recommend using it. If you use LSI JBOD mode
then all of the caching algorithms on the raid card are disabled and, for
some reason, the drive devices are not exported to FreeBSD. The working
solution is to set up all of the individual drives as separate RAID0
(raid zero) arrays and bind them all together using ZFS. We are
currently using raids in this setup in live production and they work
without issue.</p><p>For this example we are going to configure 12
RAID-0 LDs, each consisting of a single disk, and then use ZFS to make
the raid-z2 (RAID6) volume. The LSI setup will be as close to JBOD mode
as we can get, but the advantage of this mode is that it allows the caching
and optimization algorithms on the raid card to be used. Here are the RAID-0
LD and ZFS creation commands: </p><pre class="code"># Set slots 0-11 to 12 individual RAID0 volumes. This is just a simple while
# loop to go through all 12 drives. Use the "./lsi.sh status" script to see
# all the volumes afterwards.
i=0; while [ $i -le 11 ] ; do MegaCli64 -cfgldadd -r0[8:${i}] WB RA Cached CachedBadBBU -strpsz512 -a0 -NoLog ; i=`expr $i + 1`; done

# Create a RAID-Z2 (RAID6) ZFS volume out of 12 drives called "tank". Creation
# time of the ZFS raid is just a few seconds compared to creating a RAID6
# volume through the raid card, which initializes in around 19 hours.
zpool create tank raidz2 mfid0 mfid1 mfid2 mfid3 mfid4 mfid5 mfid6 mfid7 mfid8 mfid9 mfid10 mfid11

# Done! It is that easy. You should now see a drive mounted
# as "tank" using "df -h". Check the volume with "zpool status"

# OPTIONAL: We use two(2) Samsung 840 Pro 256GB SSD drives as L2ARC cache
# drives. The SSD drives are in slots 12 and 13. This means that up to 512GB of
# the most frequently accessed data can be kept in SSD cache and not read from
# spinning media. This greatly speeds up access times. We use two cache drives,
# compared to just one 512GB drive, so _when_ one SSD dies the other will take
# on the cache load (now up to 256GB) till the failed drive is replaced.
MegaCli64 -cfgldadd -r0[8:12] WB RA Cached CachedBadBBU -strpsz512 -a0 -NoLog
MegaCli64 -cfgldadd -r0[8:13] WB RA Cached CachedBadBBU -strpsz512 -a0 -NoLog
zpool add tank cache mfid12
zpool add tank cache mfid13
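
# OPTIONAL: dry run of the RAID0 while loop above. Putting "echo" in front of
# MegaCli64 prints the twelve commands without touching the controller, which
# is a handy way to review the slot numbering before committing:
i=0; while [ $i -le 11 ] ; do echo MegaCli64 -cfgldadd "-r0[8:${i}]" WB RA Cached CachedBadBBU -strpsz512 -a0 -NoLog ; i=`expr $i + 1`; done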
</pre><p></p><p><b>We lost a ZFS drive! How to replace a bad disk</b></p><p>Let's
say the drive in slot 5 died, was removed or needs to be replaced due
to reported errors. ZFS reports the "tank" pool as DEGRADED in the
"zpool status" output. We just need to pull out the old slot 5 drive,
put the new drive in slot 5, configure the new drive for RAID0 on the
LSI card and then tell FreeBSD ZFS to replace the old dead drive with
the new one we just inserted. Sounds like a lot of steps, but it is
really easy! </p><pre class="code"># First, replace the old drive with the new drive in slot 5. Then check the
# status of slot 5 by running "./lsi.sh drives"

# OPTIONAL: If the drive comes up as Unconfigured(bad) using "./lsi.sh drives"
# just run "./lsi.sh good 5" to make slot five(5) Unconfigured(good). OR,
# manually run the following two(2) MegaCli64 commands to remove any foreign
# configurations and make the drive in enclosure 8, slot 5 Unconfigured(good)
./lsi.sh good 5
 -OR manually type-
MegaCli64 -CfgForeign -Clear -aALL -NoLog
MegaCli64 -PDMakeGood -PhysDrv[8:5] -a0 -NoLog

# Configure the new drive in slot 5 for RAID0 through the LSI controller.
# Make sure the drive is in Unconfigured(good) status according to the
# "./lsi.sh drives" script found at the top of this page.
MegaCli64 -cfgldadd -r0[8:5] WB RA Cached CachedBadBBU -strpsz512 -a0 -NoLog

# Add the new drive in slot 5 (mfid5) into ZFS. The "zpool replace" command
# will replace the old mfid5 (first mention) with the new mfid5 (second
# mention). Our setup resilvered the tank pool at 1.78GB/s using all 6 CPU
# cores at a load of 4.3. Resilvering 3TB of data takes 28 minutes.
zpool replace tank mfid5 mfid5

# OPTIONAL: Since we removed the virtual drive (slot 5) and then added a
# virtual drive back in, we need to re-apply the default cache settings to the
# RAID0 volumes on the LSI card. Use "./lsi.sh status" to look at slot 5 and
# compare its values to the other drives if you are interested. Setting our
# preferred defaults is easily done using our lsi.sh script found at the
# beginning of this page and can be applied to active, live raids.
./lsi.sh setdefaults

# Done!
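
# Sanity check on the resilver figure quoted above (illustrative arithmetic
# only, decimal units assumed): 3TB moving at 1.78GB/s is roughly 28 minutes.
awk 'BEGIN { printf "%.0f minutes\n", 3000 / 1.78 / 60 }'
# prints: 28 minutes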
</pre><p></p><p><b>How fast is a ZFS RAID through the LSI MegaRAID controller ?</b></p><p>Check out our <a href="https://calomel.org/zfs_raid_speed_capacity.html">FreeBSD ZFS Raid Speeds, Safety and Capacity</a> page. We examine more than a dozen different ZFS raid configurations and compare each of them.</p><p><b>What happens when the Battery Backup Unit (BBU) is bad, disabled or missing ?</b></p><p>The
LSI raid BBU allows the raid controller to cache data before it is
written to the raid disks. Without the battery backup unit the raid card
cannot guarantee data in the card's cache will be written to the
physical disks if the power goes out. So, if the BBU is bad, if the
raid card is running a "battery relearn" test or if the BBU is disabled,
then the cached Write-Back policy is automatically disabled and
Write-Through is enabled. The result of the direct to disk Write-Through
policy is that writes become an order of magnitude slower.</p><p>As a test
we disabled cached Write-Back on our test raid. The bonnie++ benchmark
test resulted in writes of 121MB/sec compared to enabling Write-Back and
writing at 505MB/sec.</p><p>You can check the status of your BBU using the following command. </p><pre class="code">MegaCli64 -AdpBbuCmd -GetBbuStatus -a0 -NoLog | egrep -i 'charge|battery'
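
# Example of what the egrep filter keeps, run against a hypothetical sample of
# the verbose BBU report (the field names may differ slightly by firmware):
printf '%s\n' 'BatteryType: iBBU' 'Relative State of Charge: 97 %' 'Temperature: 28 C' | egrep -i 'charge|battery'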
</pre><p></p><p>You should be able to find new BBU units for as little
as $40 searching online. LSI will sell the same unit to you for well
over $100 each. The biggest problem is replacing the battery unit: since
it is inside the case, you will need to unrack the server, pull off the
top and replace the battery. Probably not that bad if you have one
server in the office, but it is quite a job to unrack a hundred raids in
a remote data center. These batteries should have been designed to be
hot swappable from the rear of the rack mounted chassis in the first
place.</p><p><b>What if you do not want to or cannot replace the BBU ?</b></p><p>Truthfully,
the raid will work perfectly fine other than higher latency, more CPU
usage and lower transfer speeds. If you want to, you can simply force the
raid card to use write-back cache even if the BBU is dead or missing
with the following command. We use the CachedBadBBU option on raid cards
which work perfectly fine but whose BBU recharge circuit does not work.
Please make sure your system is on a reliable UPS as you do not want to
lose any data still in cache and not yet written out to disk. After you
execute this command you should see that your volume's "Current Cache
Policy" includes "Write Cache OK if Bad BBU" instead of just "No
Write Cache if Bad BBU". </p><pre class="code">MegaCli64 -LDSetProp CachedBadBBU -LAll -aAll -NoLog
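
# To confirm the change took, look for the new policy string in the output of
# "MegaCli64 -LDInfo -LAll -aAll -NoLog". A minimal check against a
# hypothetical sample policy line (surrounding fields may differ on your card):
policy='Current Cache Policy: WriteBack, ReadAheadNone, Cached, Write Cache OK if Bad BBU'
case "$policy" in
  *'Write Cache OK if Bad BBU'*) echo 'CachedBadBBU active' ;;
  *) echo 'write cache still gated on the BBU' ;;
esac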
</pre><p></p> You may also want to check out this post, <a href="http://yo61.com/dell-drac-bbu-auto-learn-tests-kill-disk-performance.html">auto-learn tests kill disk performance</a>.
Remember, if your virtual disk's "Current Cache Policy" is "No Write
Cache if Bad BBU" and the raid card goes into battery relearn mode, all
write caching is disabled while the battery is temporarily offline. Take a
look at the graphs to see the severe performance degradation they
experienced. Of course, if you enabled the CachedBadBBU option then you
do not have to worry about when battery relearn mode runs, as your cache
will always be enabled.<p></p><p></p><p></p><hr></div></body></html>