Friday, November 19, 2010

Xymon 4.3.0 beta-3 available

A new beta of the Xymon monitoring system has been released.

Download it from SourceForge

Many thanks to Henrik and the team!!!

There are plenty of improvements since the previous release (and even since beta2), so the upgrade could be tricky, but it is worth it.


From the Release Notes:
Xymon 4.3.0 is the first release with new features after the 4.2.0 release. Several large changes have been made throughout the entire codebase. Some highlights (see the Changes file for a longer list):
  • Data going into graphs can now be used to alter a status, e.g. to trigger an alert from the response time of a network service.
  • Tasks in xymonlaunch can be configured to run at a specific time of day, using a cron-style syntax for when they must run.
  • Worker modules (RRD, client-data parsers etc.) can run on hosts remote from the xymond daemon, for load-sharing.
  • Support for defining holidays as non-working days in alerts and SLA calculations.
  • Hosts which appear on multiple pages in the web display can use any page they are on in the alerting rules and elsewhere.
  • Various new network tests: SOAP-over-HTTP, HTTP tests with session cookies, SSL minimum encryption strength test.
  • A new "compact" status display can group multiple statuses into one on the webpage display.
  • Configurable periods for graphs on the trends page.
  • RRD files can be configured to maintain more data and/or different data (e.g. MAX/MIN values).
  • SLA calculations now include the number of outages in addition to the down-time.

Several minor issues with beta3 have already been addressed in the svn repository and in posts on the mailing list, but in general it runs very stably (I've tested it on the FreeBSD and OpenSolaris platforms).

Sunday, October 31, 2010

Monitoring - alternatives to Nagios

Last week I attended the LSPE meetup.
Topic - Monitoring

Interesting talks.

I was really surprised that a lot of people don't like Nagios but are still using it, and in many cases alternatives were not even considered ...
For those who'd like to check out other solutions, I'd recommend starting with
Xymon and OpenNMS

Xymon - Very fast deployment, with agents available for any UNIX platform and Win32. Scales to thousands of hosts. Test scripts are easy to write or customize.

OpenNMS - A perfect solution for network/SNMP monitoring. It definitely needs more tuning for advanced options or for testing non-standard services.


Both can be deployed within the same environment, where they efficiently complement each other.

Wednesday, October 13, 2010

NexentaStor - iSCSI example

NexentaStor Community Edition has some limitations on iSCSI configuration (in both the GUI and the CLI).

Most of them can be addressed without switching to an alternative solution or to the Enterprise Edition, especially if the storage is already in use and downtime is best avoided.

Here is an example of setting up an iSCSI volume shared between 2 hosts.
It can be used for a clustered file system or as a volume for an Oracle database (ready for a RAC implementation).
Obviously, you can apply similar rules to different scenarios and network topologies.

Storage network topology:


In my setup both Host1 and Host2 are running Solaris 10.
Two fully independent switches were used to provide redundancy and additional throughput.


Workflow:
Configure iSCSI initiator on both hosts:
  • Host1
    • Initiator node name: iqn.1986-03.com.sun:XXX-f29d
  • Host2
    • Initiator node name: iqn.1986-03.com.sun:XXX-e792
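The initiator node names used above can be read on each host with the Solaris iSCSI administration tool (the IQNs shown in this post are illustrative; your hosts will report their own auto-generated names):

root@host1# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:XXX-f29d
...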
On the storage server nexenta-ce (in the GUI or CLI):
  • Create a target portal group (if one is not already created)
    • TPG1 (10.1.1.1:3260, 10.1.2.1:3260)
  • Create a volume
    • zVol ( data/shaedvolume10 )
  • Create 2 targets
    • iqn.1986-03.com.sun:XXX-fef7
    • iqn.1986-03.com.sun:XXX-07ee
Nexenta-ce (root shell):


Keep in mind that dropping into a raw root shell like this is not recommended by Nexenta:

nmc@nexenta-ce:/$ option expert_mode =1 -s
nmc@nexenta-ce:/$ !bash
root@nexenta-ce#
Targets must be offline when you add them to a target group:
root@nexenta-ce# stmfadm create-tg tg10
root@nexenta-ce# stmfadm offline-target iqn.1986-03.com.sun:XXX-07ee
root@nexenta-ce# stmfadm offline-target iqn.1986-03.com.sun:XXX-fef7
root@nexenta-ce# stmfadm add-tg-member -g tg10 iqn.1986-03.com.sun:XXX-07ee
root@nexenta-ce# stmfadm add-tg-member -g tg10 iqn.1986-03.com.sun:XXX-fef7
root@nexenta-ce# stmfadm online-target iqn.1986-03.com.sun:XXX-07ee
root@nexenta-ce# stmfadm online-target iqn.1986-03.com.sun:XXX-fef7
Let's create host groups for host1 and host2, create the logical unit, and export it to both:
root@nexenta-ce# stmfadm create-hg host1
root@nexenta-ce# stmfadm create-hg host2
root@nexenta-ce# stmfadm add-hg-member -g host1 iqn.1986-03.com.sun:XXX-f29d
root@nexenta-ce# stmfadm add-hg-member -g host2 iqn.1986-03.com.sun:XXX-e792
root@nexenta-ce# sbdadm create-lu /dev/zvol/rdsk/data/tg10
root@nexenta-ce# sbdadm list-lu
GUID                        DATA SIZE           SOURCE
--------------------------- ------------------- ----------------
600XXXXXXX0001              214748364800        /dev/zvol/rdsk/data/tg10
root@nexenta-ce# stmfadm add-view -t tg10 -h host1 600XXXXXXX0001
root@nexenta-ce# stmfadm add-view -t tg10 -h host2 600XXXXXXX0001

On both host1 and host2 (root shell).
In the current setup I'm using only static discovery.

root@host1# iscsiadm modify discovery -s enable
root@host1# iscsiadm add static-config iqn.1986-03.com.sun:XXX-fef7,10.1.1.1
root@host1# iscsiadm add static-config iqn.1986-03.com.sun:XXX-07ee,10.1.2.1
root@host1# iscsiadm list target
Target: iqn.1986-03.com.sun:XXX-07ee
....
Target: iqn.1986-03.com.sun:XXX-fef7
....
root@host1# devfsadm -Cv
root@host1# echo | format

Result - the created volume is ready for operation on both hosts, with high availability at the network layer.
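With two independent switches, each host actually sees the LUN over two separate paths (one per subnet). To have Solaris merge them into a single multipathed device, MPxIO can be enabled for the iSCSI initiator driver. A sketch for Solaris 10, assuming the stock driver configuration file location - verify against your release before rebooting:

# /kernel/drv/iscsi.conf - enable MPxIO for iSCSI-attached LUNs
mpxio-disable="no";

After a reboot, the paths for each logical unit can be inspected with `mpathadm list lu` on the hosts.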