tag:blogger.com,1999:blog-72200382396327956742024-03-12T17:37:33.628-07:00AbrisTechAbris: hand sketch, contour map.<br>
Notes of a System Administrator.Alex Levinhttp://www.blogger.com/profile/07550199995528194444noreply@blogger.comBlogger9125tag:blogger.com,1999:blog-7220038239632795674.post-90748859220614968812011-09-30T22:13:00.000-07:002012-04-04T09:02:22.570-07:00Solaris10 u10 - lucreate failures<div dir="ltr" style="text-align: left;" trbidi="on">
After several successful upgrades to the currently released Solaris10 u10 I ran into an issue:<br />
lucreate failed to prepare the alternate boot environment with an error like:
<br />
<pre>...
Mounting ABE .
ERROR: mount: /zones/myzone-dataset1/legacy: No such file or directory
ERROR: cannot mount mount point
...
</pre>
<br />
and several warnings like:<br />
<pre>WARNING: Directory zone lies on a filesystem shared between BEs, remapping path to .</pre>
<div>
<br />
Hmm, OK - I don't need these filesystems mounted, but I can safely mount them (at least temporarily)
if it solves the problem. A new mountpoint was set and lucreate finished successfully, with warnings only.<br />
The box is not in a critical environment and the warnings were ignored - <b>BIG MISTAKE</b> -
</div>
<br />
<span style="color: red;">!!! <b>Do not ignore WARNINGS during lucreate</b> !!!</span><br />
<br />
But anyway, the upgrade finished successfully, the new BE was activated, init 6 ...<br />
<br />
The server started, but two zones failed to start ... (other zones on the server booted without issues)<br />
An attempt to boot an affected zone resulted in multiple complaints about filesystems that are not "legacy"-mounted in the global zone ...<br />
Hmm ...
Looking at the <i>zonecfg -z myzone export</i> output, I see a bunch of
<br />
<pre> add fs
set dir=....
</pre>
in addition to the correctly defined
<br />
<pre> add dataset
set name=...
</pre>
I fixed the zone config by removing all fs records that shouldn't be there,
and another boot attempt revealed that the system was trying to boot the zone from zonepath=/zones/myzone-sol10u10 ( instead of /zones/myzone )<br />
<br />
Checking the real status of the filesystems and fixing the zone config again,<br />
but: "<b>Zone myzone already installed; set zonepath not allowed.</b>"<br />
<br />
Not allowed via zonecfg, but it can be done by editing /etc/zones/myzone.xml and /etc/zones/index ( don't forget to back up the current files ... )<br />
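For illustration only - the zone names and paths here are hypothetical, and the index format is simplified - the edit can be scripted against a backup copy, since /etc/zones/index lines are colon-separated ( roughly zonename:state:zonepath ):

```shell
# Demo on a sample copy; always edit a backup, never the live file.
# Zone name and paths are made-up examples.
cat > /tmp/index.sample <<'EOF'
myzone:installed:/zones/myzone-sol10u10
otherzone:installed:/zones/otherzone
EOF
# Point myzone back at its real zonepath
sed 's|^myzone:installed:/zones/myzone-sol10u10$|myzone:installed:/zones/myzone|' \
    /tmp/index.sample > /tmp/index.fixed
grep '^myzone:' /tmp/index.fixed    # -> myzone:installed:/zones/myzone
```

After verifying the result, the fixed file can replace the original ( with the zone halted ).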
<br />
It looks much better now - all zones are up and running ...<br />
<br />
But lucreate is still broken, failing on every attempt to create a new BE.<br />
It looks like a bug in Live Upgrade. A search shows the same issue in <a href="http://unix.ittoolbox.com/groups/technical-functional/solaris-l/lucreate-fails-solaris-10-4417408">this thread</a>. Currently there are no updates for patches 121431 (x86) and 121430 (sparc); double-checking and filing the bug.<br />
<br />
<b><span style="color: red;">Update:</span></b><br />
After a long conversation with Oracle I was able to confirm that there is a bug in the current LU suite ( patch 121431-67 ). The solution is simple - downgrade LU to 121431-58.<br />
In case the old version of LU is not backed up, just install the original one from the Solaris media.<br />
<br />
<br /></div>Alex Levinhttp://www.blogger.com/profile/07550199995528194444noreply@blogger.com2tag:blogger.com,1999:blog-7220038239632795674.post-53660746909917394132011-09-20T20:59:00.000-07:002011-09-29T22:14:06.557-07:00Dell PERC controllers and Solaris<div dir="ltr" style="text-align: left;" trbidi="on">
By default Solaris doesn't include tools for monitoring and management of Dell RAID adapters, but most of these cards ( PERC H700, 6/i ... ) are re-branded LSI controllers.<br />
Even if the adapter is used in a minimalistic config ( almost JBOD ) and RAID functionality is delegated to ZFS, I'd prefer to have at least some visibility into the state of the card ( battery, memory ... )<br />
<br />
Solaris 10 uses the mega_sas ( LSI ) driver, so for configuration, monitoring, etc. you can safely use the MegaCli utility, which can be downloaded from the <a href="http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/8.33-01_Solaris_MSM.zip">LSI support</a> site.<br />
<br />
Not sure if it is officially supported by Dell or Oracle, but it works - personally tested on an H700 and a 6/i - just make sure that you run it with root privileges.<br />
<br />
As a monitoring tool, <a href="http://www.it-eckert.de/final.html#raid-monitor">raid-monitor</a> can be used with <a href="http://www.xymon.com/">Xymon</a>. It generates an alert if the current state differs from a generated "good" reference file.<br />
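For a quick manual check, MegaCli invocations like the following can be used ( a sketch, run as root; -aALL addresses all adapters - verify the flags against the LSI documentation for your version ):

```shell
# Controller info, including memory and firmware
./MegaCli -AdpAllInfo -aALL
# Battery backup unit status
./MegaCli -AdpBbuCmd -GetBbuStatus -aALL
# Physical disk inventory and state
./MegaCli -PDList -aALL
```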
<br /></div></div>
Alex Levinhttp://www.blogger.com/profile/07550199995528194444noreply@blogger.com0tag:blogger.com,1999:blog-7220038239632795674.post-66005251332919754942011-09-16T11:09:00.000-07:002011-10-22T21:58:05.081-07:00Solaris 10 8/11 is released<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
Solaris 10 u10 ( 8/11 ) is <a href="http://www.oracle.com/technetwork/server-storage/solaris/overview/solaris-latest-version-170418.html">released</a> and available for <a href="http://www.oracle.com/technetwork/server-storage/solaris/downloads/index.html">download</a><br />
Notes on the upgrade ( from u9):
<br />
<br />
<pre><div style="color: green;">
bash-3.00# lofiadm -a /export/home/iso/sol-10-u10-ga2-x86-dvd.iso
/dev/lofi/1
bash-3.00# mount -F hsfs /dev/lofi/1 /mnt/
bash-3.00# /mnt/Solaris_10/Tools/Installers/liveupgrade20 # If upgrading from old solaris and liveupgrade 2.0 is not installed
bash-3.00# lucreate -n sol10u10
bash-3.00# echo "auto_reg=disable" > /tmp/sysidcfg
bash-3.00# luupgrade -u -n sol10u10 -s /mnt -k /tmp/sysidcfg
...
...
INFORMATION: The file on boot
environment &lt;sol10u10&gt; contains a log of the upgrade operation.
INFORMATION: The file on boot
environment &lt;sol10u10&gt; contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment &lt;sol10u10&gt;. Before you activate boot
environment &lt;sol10u10&gt;, determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment &lt;sol10u10&gt; is complete.
...
bash-3.00# lumount sol10u10 /a
</div>
</pre>
Review log files: /a/var/sadm/system/logs/upgrade_log and
/var/sadm/system/data/upgrade_cleanup<br />
<br />
<pre><div style="color: green;">
bash-3.00# luumount /a
bash-3.00# luactivate sol10u10
bash-3.00# init 6
</div>
</pre>
<br />
Review results of the upgrade
<br />
<br />
<pre><div style="color: green;">
bash-3.2# lustatus | egrep sol10u10\|Name\|Env\|--
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol10u10 yes yes yes no -
bash-3.2# uname -svr
SunOS 5.10 Generic_147441-01
</div>
</pre>
<br />
If everything looks OK ( and in my case there were no issues ), proceed with the upgrade of the zpool and zfs versions, and in case of a mirrored boot pool - don't forget to update grub on the second disk.
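A sketch of those final steps ( the pool name rpool and the disk device are assumptions - substitute your own ):

```shell
# Upgrade pool and filesystem versions; note that after upgrading the
# root pool the old boot environment can no longer be booted
zpool upgrade rpool
zfs upgrade -r rpool
# Mirrored boot pool on x86: reinstall grub on the second disk
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
```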
<br />
<br />
Changes since u9 ( zpool v22 , zfs v4 ):
<br />
<pre>zpool:
VER DESCRIPTION
--- --------------------------------------------------------
23 Slim ZIL
24 System attributes
25 Improved scrub stats
26 Improved snapshot deletion performance
27 Improved snapshot creation performance
28 Multiple vdev replacements
29 RAID-Z/mirror hybrid allocator
bash-3.2# zfs upgrade -v
The following filesystem versions are supported:
zfs
VER DESCRIPTION
--- --------------------------------------------------------
5 System attributes
</pre>
</div>
<div style="color: red;">
<b>!!! If system zpool is upgraded to the new version - there will be no way to boot into the old environment !!!</b>
</div>
<div>
<br />
<hr />
<b>Update:<br />
There are potential issues when upgrading host with multiple zones, see details in the <a href="http://www.abristech.net/2011/09/solaris10-u10-lucreate-failures.html">post</a></b>
<br />
<hr />
</div>
</div>Alex Levinhttp://www.blogger.com/profile/07550199995528194444noreply@blogger.com0tag:blogger.com,1999:blog-7220038239632795674.post-77952378541879536942011-09-13T00:16:00.000-07:002011-09-28T22:22:40.941-07:00Xymon - monitoring from the cloud<div dir="ltr" style="text-align: left;" trbidi="on">
In this post I'm going to deploy <a href="http://xymon.com/">Xymon</a> in the Amazon cloud ( <a href="http://aws.amazon.com/">AWS</a> ) for off-site monitoring.<br />
<br />
Running an “external” monitor in the cloud is an efficient alternative to third-party services ( search for "External Website Monitoring" ). The easy installation, small footprint (no database) and flexibility of <a href="http://xymon.com/">xymon</a> make it a very attractive instrument for such a project.<br />
<br />
Below are just some notes for a minimal setup:<br />
<br />
Log on to AWS and launch the smallest instance ( t1.micro ) using the Basic 32-bit Amazon Linux AMI. Make sure that SSH and HTTP ( and/or HTTPS ) connections are permitted in the security group. <br />
<br />
<pre><div style="color: green;">
ssh -i YourKey.pem ec2-user@ec2-XX.compute-1.amazonaws.com
[ec2-user@mon ~]$ sudo -i
[root@mon ~]# yum -y update
…
( reboot if needed )
...
[root@mon ~]# useradd -m xymon
</div>
</pre>
<br />
Now let’s add all the packages we need for the build <br />
<br />
<pre><div style="color: green;">
[root@mon ~]# yum -y install subversion fping gcc gcc-c++ openssl-devel make \
binutils rrdtool rrdtool-devel pcre-devel httpd cyrus-sasl-devel \
ncurses-devel
</div>
</pre>
<br />
Get the source code from the repository ( or download an archive from <a href="http://xymon.com/">xymon.com</a> ).<br />
<br />
<pre><div style="color: green;">
[root@mon ~]# mkdir src
[root@mon ~]# cd src
[root@mon ~]# svn co https://xymon.svn.sourceforge.net/svnroot/xymon/branches/4.3.5
[root@mon ~]# cd 4.3.5
[root@mon ~]# ./configure
...
I found fping in /usr/sbin/fping
Do you want to use it [Y/n] ?
Y
…
Do you want to be able to test SSL-enabled services (y) ?
Y
…
What group-ID does your webserver use [nobody] ?
apache
…
[root@mon ~]# make && make install
</div>
</pre>
<br />
OK, the application is installed and can be started, but at this point it will only check localhost and report to log files.<br />
<br />
Let's prepare the front end - the Apache web server.<br />
<br />
Create a web user and restrict access to /xymon/<br />
<br />
<pre><div style="color: green;">
[root@mon ~]# htpasswd -c /etc/httpd/xymonpasswd admin
[root@mon ~]# cp ~xymon/server/etc/xymon-apache.conf /etc/httpd/conf.d/xymon-apache.conf
[root@mon ~]# sed -i 's/\/home\/xymon\/server\/etc\/xymonpasswd/\/etc\/httpd\/xymonpasswd/g' /etc/httpd/conf.d/xymon-apache.conf
[root@mon ~]# sed -i 's/AuthGroupFile/#AuthGroupFile/g' /etc/httpd/conf.d/xymon-apache.conf
</div>
</pre>
<br />
Review the /etc/httpd/conf.d/xymon-apache.conf ( and httpd.conf ) files, and start/restart the apache service;<br />
then sudo to xymon and add monitoring targets to ~/server/etc/hosts.cfg ( read the <a href="http://www.xymon.com/xymon/help/manpages/man5/hosts.cfg.5.html">manpage</a> )<br />
<br />
As an example we can test some Google sites; in the future the connection to google.com could be used as an "always up" service. Adding such a dependency helps avoid noise from hiccups on the AWS network.<br />
<br />
<br />
<pre><div style="color: green;">
group-compress Web services
0.0.0.0 www.google.com # http://www.google.com
0.0.0.0 encrypted.google.com # https://encrypted.google.com/
group-compress DNS
8.8.8.8 google-public-dns-a.google.com # dns=A:www.google.com,MX:google.com
group-compress Local
127.0.0.1 localhost # bbd http://localhost/
</div>
</pre>
<br />
More sophisticated examples are available on <a href="http://xymon.com/">http://xymon.com</a><br />
<br />
Now start the xymon server ( as user xymon ) with<br />
~/server/xymon.sh start and check your page <b>http://ec2-XX.compute-1.amazonaws.com/xymon/</b><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPCzs-Mzzy3l9LywxVH5yZtBwhmN7bswH_t62i2HylnAPSbqoN50xClUg74O_oP1FB6r8tjYEJN_C5W50gCjDe_LESD7h5zpw_carnXTBC9ARE17mySqsVg4shKKGp9Ew-o5DdRaMPwE5v/s1600/xymon.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="222" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPCzs-Mzzy3l9LywxVH5yZtBwhmN7bswH_t62i2HylnAPSbqoN50xClUg74O_oP1FB6r8tjYEJN_C5W50gCjDe_LESD7h5zpw_carnXTBC9ARE17mySqsVg4shKKGp9Ew-o5DdRaMPwE5v/s400/xymon.png" width="400" /></a></div>
<br />
<br />
<br />
Now - tricky part.<br />
The web interface is nice - trends, etc … - but what about alert notifications ?<br />
It's easy to add a record to ~xymon/server/etc/alerts.cfg, but most likely e-mails from an AWS host will be delivered to a spam folder …<br />
One solution is to use the <a href="http://aws.amazon.com/ses/">amazon email service</a>,<br />
another is to use any public e-mail provider that supports SMTP authentication.<br />
For example:<br />
- create new mail account on mail.google.com<br />
- compile mutt on your virtual server ( the one from the aws yum repositories won’t work ... )<br />
-- get recent source from http://www.mutt.org/download.html <br />
<pre><div style="color: green;">
./configure --enable-imap --enable-smtp --with-sasl --with-ssl && make && make install </div>
</pre>
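To confirm that the freshly built mutt really has SMTP and SASL support compiled in, you can inspect its compile options ( the exact output varies by version ):

```shell
# +USE_SMTP and +USE_SASL in the compile options confirm the build
mutt -v | grep -Eo '[+-]USE_(SMTP|SASL)'
```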
<br />
create a .muttrc in ~xymon with the following contents: <br />
<br />
<pre><div style="color: green;">
# SENDING MAIL
set copy=yes
set smtp_url="smtp://NEW.EMAIL@smtp.gmail.com:587/"
set smtp_pass="EMAIL.PASS"
set from="NEW.EMAIL@gmail.com"
set realname="Xymon in the Cloud"
# RECEIVING MAIL
set imap_user = "NEW.EMAIL@gmail.com"
set imap_pass = "EMAIL.PASS"
set folder = "imaps://imap.gmail.com:993"
set spoolfile="imaps://imap.gmail.com/INBOX"
set postponed="imaps://imap.gmail.com/Drafts"
set record="imaps://imap.gmail.com/Sent"
set message_cachedir=~/.mutt/cache/bodies
set certificate_file=~/.mutt/certificates
set move = no
</div>
</pre>
<br />
Verify that it really works: <br />
<br />
<pre><div style="color: green;">
date | mutt -s test your_real_address@provider.com
</div>
</pre>
And create a script for alert notifications like: <br />
<br />
<pre><div style="color: green;">
[xymon@mon ~]$ cat ~xymon/bin/m.sh
#!/bin/bash
# Map a recovery event to green; otherwise keep the alert's color level
if [ "${RECOVERED}" = "1" ]
then
    export BBCOLORLEVEL="RECOVERED"
    export BBCOLOR="green"
else
    export BBCOLOR=$BBCOLORLEVEL
fi
# Subject is host.service:color; quote variables that may contain spaces
S=$BBHOSTSVC:$BBCOLOR
echo "$BBALPHAMSG" | mutt -s "$S" "$RCPT"
</div>
</pre>
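Before wiring the script into alerts.cfg you can dry-run it with a stubbed mutt, so nothing is actually mailed; the variable values below are made up, and /tmp/m.sh is just a copy of the alert script:

```shell
# Put a stub mutt first in PATH: it swallows the body and prints its args
mkdir -p /tmp/stub
cat > /tmp/stub/mutt <<'EOF'
#!/bin/bash
cat > /dev/null
echo "would mail: subject=$2 to=$3"
EOF
chmod +x /tmp/stub/mutt
# A copy of the alert script, used for the dry run
cat > /tmp/m.sh <<'EOF'
#!/bin/bash
if [ "${RECOVERED}" = "1" ]
then
    export BBCOLORLEVEL="RECOVERED"
    export BBCOLOR="green"
else
    export BBCOLOR=$BBCOLORLEVEL
fi
S=$BBHOSTSVC:$BBCOLOR
echo "$BBALPHAMSG" | mutt -s "$S" "$RCPT"
EOF
# Simulate the environment xymon exports for a red alert
env PATH=/tmp/stub:$PATH RECOVERED=0 BBCOLORLEVEL=red \
    BBHOSTSVC=www.example.com.http BBALPHAMSG="status red" \
    RCPT=you@example.com bash /tmp/m.sh
# -> would mail: subject=www.example.com.http:red to=you@example.com
```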
Finally, add alert rules ( ~xymon/server/etc/alerts.cfg ) <br />
<br />
<pre><div style="color: green;">
HOST=* COLOR=red
SCRIPT /home/xymon/bin/m.sh your_real_address@provider.com FORMAT=TEXT REPEAT=3h RECOVERED
</div>
</pre>
<b>DONE</b><br />
<br />
A long story with many steps, but in reality it should take less than an hour to have basic monitoring running.<br />
The operational cost of this setup will definitely be lower than that of comparable services from “remote site monitoring” providers.<br />
<br />
<i>PS</i>. Before real usage, don't forget to switch to HTTPS and review all config files ...<br />
Subscribe to the Xymon mailing list ( http://xymon.com/xymon/help/known-issues.html ) for friendly support - ask for help and give help to others.<br />
<br /></div>
Alex Levinhttp://www.blogger.com/profile/07550199995528194444noreply@blogger.com1tag:blogger.com,1999:blog-7220038239632795674.post-45825499139574905472011-09-10T23:47:00.000-07:002011-09-13T00:11:58.465-07:00Using xymon to monitor status of cfengine<div dir="ltr" style="text-align: left;" trbidi="on"><br />
During the deployment of cfengine one of my biggest concerns was how to make sure that it is working as expected. Obviously there are multiple elements in the engine itself that can alert on or, even better, fix many issues. <br />
<br />
The free version doesn't provide reporting, trends ... - no visualization - but it has enough to build external analyzers and reporting; external tests will also give you a bit more confidence that everything is OK ( or that something is wrong )<br />
<br />
As my main monitoring platform I'm using xymon, and its functionality can be easily extended.<br />
<br />
Initially I'd like to ensure that all agents are alive and really talking to the server(s).<br />
In my case I'm expecting a connection approximately every 5 min (the default behavior), so the "last seen" value should be less than 5 min + <i>"splaytime"</i>.<br />
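That freshness rule is easy to sketch as a shell check - given a last-seen epoch timestamp and a splaytime in seconds, an agent is fine while its age stays under 300 s + splaytime ( the numbers below are illustrative, not taken from the extension itself ):

```shell
# Return green while the agent's last report is fresh enough, else red
check_agent() {    # $1 = last-seen epoch seconds, $2 = splaytime (s)
    local age limit
    age=$(( $(date +%s) - $1 ))
    limit=$(( 300 + $2 ))
    if [ "$age" -le "$limit" ]; then echo green; else echo red; fi
}
check_agent "$(( $(date +%s) - 60 ))" 30    # seen 1 min ago  -> green
check_agent "$(( $(date +%s) - 900 ))" 30   # seen 15 min ago -> red
```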
<br />
The code of the extension is available for download from the <a href="http://code.google.com/p/abris/">google code page</a><br />
<br />
Or check out the most current version from svn:<br />
<pre>svn co https://abris.googlecode.com/svn/trunk/xymon-ext/cfengine
</pre><br />
<b>requirements</b>:<br />
<ul style="text-align: left;"><li>cfengine3</li>
<li>python 2.6+ </li>
</ul><br />
<hr />Example of a healthy chart, as seen from the cfserver:<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiU_y2Z_wEz2r7P1vkH9lZKqNQWkafjDzZY-DlPqEkwIjohPrxGYHarYjUyMvUl1bmK4P-3fkcmEF9hPZIJp-bau1abSYAYrbZgAVh_QCdY1GjjaYY__I5EiwDJ8-2QSGFXo54pAMfJRGId/s1600/cf-good.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiU_y2Z_wEz2r7P1vkH9lZKqNQWkafjDzZY-DlPqEkwIjohPrxGYHarYjUyMvUl1bmK4P-3fkcmEF9hPZIJp-bau1abSYAYrbZgAVh_QCdY1GjjaYY__I5EiwDJ8-2QSGFXo54pAMfJRGId/s400/cf-good.png" width="400" /></a></div><div><br />
Significant spikes in the chart indicate that you need to check the status of the suspicious agent<br />
<br />
I'm planning to add more features to this test, so stay tuned.<br />
<br />
</div></div>
Alex Levinhttp://www.blogger.com/profile/07550199995528194444noreply@blogger.com1tag:blogger.com,1999:blog-7220038239632795674.post-45825499139574905472011-09-10T23:47:00.000-07:002011-09-13T00:11:58.465-07:00Using xymon to monitor status of cfengine<div dir="ltr" style="text-align: left;" trbidi="on"><br />
<br />
Download it from <a href="http://sourceforge.net/projects/xymon">Sourceforge</a><br />
<br />
<b>Many thanks to Henrik and the team!!! </b><br />
<br />
There are plenty of improvements since the previous release (and even since beta2), so the upgrade could be tricky, but it's worth it.<br />
<br />
<br />
From Release Notes:<br />
<pre><div style="color: green;">Xymon 4.3.0 is the first release with new features after
the 4.2.0 release. Several large changes have been made
throughout the entire codebase. Some highlights (see
the Changes file for a longer list):
* Data going into graphs can now be used to alter a status,
e.g. to trigger an alert from the response time of a network
service.
* Tasks in xymonlaunch can be configured to run at a specific
time of day using a cron-style syntax for when they must run.
* Worker modules (RRD, client-data parsers etc) can operate on
remote hosts from the xymond daemon, for load-sharing.
* Support for defining holidays as non-working days in alerts and
SLA calculations.
* Hosts which appear on multiple pages in the web display can
use any page they are on in the alerting rules and elsewhere.
* Various new network tests: SOAP-over-HTTP, HTTP tests with
session cookies, SSL minimum encryption strength test.
* New "compact" status display can group multiple statuses into
one on the webpage display.
* Configurable periods for graphs on the trends page.
* RRD files can be configured to maintain more data and/or
different data (e.g. MAX/MIN values)
* SLA calculations now include the number of outages in addition
to the down-time.
</div></pre><br />
Several minor issues with beta3 are already addressed in the svn repository and in posts on the mailing list, but generally it is very stable ( I've tested it on FreeBSD and OpenSolaris platforms)Alex Levinhttp://www.blogger.com/profile/07550199995528194444noreply@blogger.com0tag:blogger.com,1999:blog-7220038239632795674.post-19363471493270628652010-10-31T23:27:00.000-07:002011-08-24T16:36:36.262-07:00Monitoring - alternatives to NagiosLast week I attended the <a href="http://www.meetup.com/SF-Bay-Area-Large-Scale-Production-Engineering">LSPE meetup</a>.<br />
Topic - <b>Monitoring</b><br />
<br />
Interesting talks.<br />
<br />
I was really surprised that a lot of people don't like Nagios but are still using it, and in many cases alternatives were not even considered ...<br />
For those who'd like to check out other solutions, I'd recommend starting with<br />
<a href="http://www.xymon.com/">Xymon</a> and <a href="http://www.opennms.org/">OpenNMS</a> <br />
<br />
<a href="http://www.xymon.com/">Xymon</a> - Very fast deployment, agents available for any UNIX platform and win32. Scalable to thousands of hosts. Easy to write or customize test scripts.<br />
<br />
<a href="http://www.opennms.org/">OpenNMS</a> - A perfect solution for network/SNMP monitoring. It definitely needs more tuning for advanced options or for testing non-standard services.<br />
<br />
<br />
Both could be deployed within the same environment and efficiently complement each other.Alex Levinhttp://www.blogger.com/profile/07550199995528194444noreply@blogger.com0tag:blogger.com,1999:blog-7220038239632795674.post-71859780381741359502010-10-13T23:11:00.000-07:002011-09-13T00:12:21.306-07:00NexentaStor - iSCSI example<a href="http://nexentastor.org/">NexentaStor</a> community edition has some limitations on iSCSI configuration ( in the GUI or CLI interface )<br />
<br />
Most of them can be addressed without switching to an alternative solution or to the <a href="http://nexenta.com/">Enterprise edition</a>, especially if the storage is already in use and downtime is better avoided.<br />
<br />
Here is an example of setting up an iSCSI volume shared between 2 hosts. <br />
It can be used for a clustered file system or as a volume for an Oracle database ( ready for a RAC implementation ).<br />
Obviously, you can apply similar rules to different scenarios and network topologies.<br />
<br />
Storage network topology:<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVqO2QeEVOZlYU16NYczfBX9F84mhMEI3_HyJJhisXOhq4LHM_uvx4NAzMHLJqIfjWBP9LRuNqe04DG2XpIHKwl1fc9YzE5kUVNyaSyoYPGNnHgrUbrcKsbBepILm8QDOoZHoiOAIrG2xG/s1600/iSCSI_example.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVqO2QeEVOZlYU16NYczfBX9F84mhMEI3_HyJJhisXOhq4LHM_uvx4NAzMHLJqIfjWBP9LRuNqe04DG2XpIHKwl1fc9YzE5kUVNyaSyoYPGNnHgrUbrcKsbBepILm8QDOoZHoiOAIrG2xG/s1600/iSCSI_example.png" /></a></div><br />
In my setup both Host1 and Host2 are running Solaris 10.<br />
Two fully independent switches were used to provide redundancy and additional throughput.<br />
<br />
<br />
Workflow:<br />
Configure iSCSI initiator on both hosts:<br />
<ul><li>Host1</li>
<ul><li>Initiator node name: iqn.1986-03.com.sun:XXX-f29d</li>
</ul><li>Host2</li>
<ul><li>Initiator node name: iqn.1986-03.com.sun:XXX-e792</li>
</ul></ul>On the storage server <b>nexenta-ce</b> ( in GUI or CLI ) <br />
<ul><li> Create a target portal group ( if it is not already created )</li>
<ul><li>TPG1 (10.1.1.1:3260, 10.1.2.1:3260)</li>
</ul><li>Create volume </li>
<ul><li>zVol ( data/shaedvolume10 )</li>
</ul><li>create 2 targets</li>
<ul><li>iqn.1986-03.com.sun:XXX-fef7</li>
<li>iqn.1986-03.com.sun:XXX-07ee</li>
</ul></ul><b>Nexenta-ce</b>(root shell):<br />
<br />
<br />
<div style="color: yellow;">Keep in mind that it is not recommended by Nexenta </div><br />
<pre><div style="color: green;">nmc@nexenta-ce:/$ option expert_mode =1 -s
nmc@nexenta-ce:/$ !bash
root@nexenta-ce#
</div></pre>Targets must be offline when you are creating target group:<br />
<pre><div style="color: green;">root@nexenta-ce# stmfadm create-tg tg10
root@nexenta-ce# stmfadm offline-target iqn.1986-03.com.sun:XXX-07ee
root@nexenta-ce# stmfadm offline-target iqn.1986-03.com.sun:XXX-fef7
root@nexenta-ce# stmfadm add-tg-member -g tg10 iqn.1986-03.com.sun:XXX-07ee
root@nexenta-ce# stmfadm add-tg-member -g tg10 iqn.1986-03.com.sun:XXX-fef7
root@nexenta-ce# stmfadm online-target iqn.1986-03.com.sun:XXX-07ee
root@nexenta-ce# stmfadm online-target iqn.1986-03.com.sun:XXX-fef7
</div></pre>Let's create a host group for each of host1 and host2:<br />
<pre><div style="color: green;">root@nexenta-ce# stmfadm create-hg host1
root@nexenta-ce# stmfadm create-hg host2
root@nexenta-ce# stmfadm add-hg-member -g host1 iqn.1986-03.com.sun:XXX-f29d
root@nexenta-ce# stmfadm add-hg-member -g host2 iqn.1986-03.com.sun:XXX-e792
root@nexenta-ce# sbdadm create-lu /dev/zvol/rdsk/data/tg10
root@nexenta-ce# sbdadm list-lu
GUID DATA SIZE SOURCE
--------------------------- ------------------- ----------------
600XXXXXXX0001 214748364800 /dev/zvol/rdsk/data/tg10
root@nexenta-ce# stmfadm add-view -t tg10 -h host1 600XXXXXXX0001
root@nexenta-ce# stmfadm add-view -t tg10 -h host2 600XXXXXXX0001
</div></pre><br />
<b>Both host1 and host2 </b> (root shell)<br />
In the current setup I'm using static discovery only.<br />
<br />
<pre><div style="color: green;">root@host1# iscsiadm modify discovery -s enable
root@host1# iscsiadm add static-config iqn.1986-03.com.sun:XXX-fef7,10.1.1.1
root@host1# iscsiadm add static-config iqn.1986-03.com.sun:XXX-07ee,10.1.2.1
root@host1# iscsiadm list target
Target: iqn.1986-03.com.sun:XXX-07ee
....
Target: iqn.1986-03.com.sun:XXX-fef7
....
root@host1# devfsadm -Cv
root@host1# echo | format
</div></pre><br />
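If MPxIO is enabled for iSCSI on the hosts ( an assumption - it is not configured above ), you can also confirm that the LU is visible over both paths; the device name below is an example built from the masked GUID:

```shell
# List multipathed logical units and their operational path counts
mpathadm list lu
# Per-path details for one LU
mpathadm show lu /dev/rdsk/c0t600XXXXXXX0001d0s2
```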
Result - the created volume is ready for operations on both hosts plus high availability on the network layer.Alex Levinhttp://www.blogger.com/profile/07550199995528194444noreply@blogger.com0tag:blogger.com,1999:blog-7220038239632795674.post-6628591805754338412007-10-01T22:51:00.001-07:002011-08-31T20:12:42.063-07:00Fog<img alt="" border="0" id="BLOGGER_PHOTO_ID_5116635741881911282" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXi506WLarxuOtW02B64tqfvn126whTYBiPMQU3bok1bx59mTO3H_A0-sJQI5TVY9rT1a3y1Y-kZFN18su6PEQcn0vfbSyltU9qOzyqPmhX6zTmdsuRZVHzm060RoIXAsuNMEquursZG4c/s640/GoldenGate.jpg" style="float: left; margin: 0pt 10px 10px 0pt;" />Alex Levinhttp://www.blogger.com/profile/07550199995528194444noreply@blogger.com0