For most Linux users, “dd” is mostly used for disk-level tasks, such as copying one disk to another (byte for byte), creating an ISO from a CD/DVD, and so on. I personally didn’t know what else I would use dd for until I ran across a particular need…
I needed my Linux script to read data from a Bluetooth sensor (this of course applies to other I/O devices, such as USB). Normally, using the /dev filesystem it’s quite easy to read from and write to devices. However, this is true only when the data being read from the sensor has a defined start and end. For example, in my case, reading from the Bluetooth sensor would normally involve a simple command like “cat /dev/rfcomm0”. The Bluetooth sensor connects, sends data, and disconnects. The disconnect is a measurable “end” to the data: most of the time the disconnect action deletes the /dev/rfcomm0 device, so the “cat” command exits and the script can continue processing.
But what happens if the sensor sends a continuous data stream? In this case there is no “end” event: the cat command would continue spewing output, but would never stop and pass control to the next command in the script. This creates a problem for the script author: how do you read a discrete set of data from the sensor, then pass control on to the next command?
That’s where dd came into the picture. Rather than using “cat /dev/rfcomm0” we can use something like “dd if=/dev/rfcomm0 bs=8 count=1”. This reads a single block of 8 bytes (bs=8, count=1) rather than reading indefinitely, then exits, passing control to the next script command. You then place the “dd” command, along with any output-processing commands, inside a while loop to continuously process output until some flag is set or some other user interaction occurs.
PS: by default dd prints a record count and some other diagnostic data to stderr. In this scenario this is unwanted output, and to get rid of it you simply redirect stderr to /dev/null like so:
dd if=/dev/rfcomm0 bs=8 count=1 2> /dev/null
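The loop described above might be sketched like this (a minimal sketch; the function name and the processing step are placeholders, not the original script). The whole loop’s stdin is redirected from the device so that each dd call consumes the *next* 8 bytes rather than re-reading from the start:

```shell
#!/bin/sh
# Read fixed-size records from a stream until it ends.
read_records() {
    while :; do
        record=$(dd bs=8 count=1 2>/dev/null)   # grab one 8-byte record from stdin
        [ -n "$record" ] || break               # empty read: stream has ended
        printf '%s\n' "$record"                 # placeholder for real output processing
    done < "$1"
}

# Usage (device path is an example):
# read_records /dev/rfcomm0
```

Note that capturing dd’s output in a shell variable is only safe for text-like records; truly binary sensor data should be piped straight from dd into the processing command instead.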
Objective: given a bare-metal install of Red Hat Enterprise Linux on a blade within the IBM BladeCenter, configure the OS to successfully mount a LUN on the SAN (Storwize V7000) as usable storage.
- Blades: IBM BladeCenter 88524TG
- SAN storage: Fibre Channel Storwize V7000
- HBAs: QLogic
This article assumes all cabling and SAN zoning has already been done.
1. In this example we install Red Hat Enterprise Linux 5.8 directly on the blade unit (bare metal, not using a hypervisor such as VMware). Simply download the appropriate ISO, boot from the CD and install Red Hat as per the usual installation procedure, customizing the installation as appropriate for your environment.
2. Reboot the blade. When prompted, press F1 to enter BIOS setup. Log in to the V7000 and add a new host (hosts > hosts > add new host > add port definition). Add the WWNs of the blade’s two HBAs to the newly created host.
3. Back on the blade, exit the BIOS and continue booting the Red Hat OS. The IBM Storwize will mark the host as being “offline”, but this is simply because we have not yet presented any LUNs to the Red Hat OS. (Thanks to Isaac Zarb for pointing this out.)
4. Once Red Hat is booted, make sure the QLogic drivers (qla2xxx at the time of writing) are present and the QLogic HBAs are detected properly. You may use the following:
- lspci: within the output, you should see the QLogic HBAs listed. While they may appear here, this does not necessarily mean they can be used (i.e. that functional drivers are loaded), but simply that the hardware has been correctly detected and identified by Red Hat.
- dmesg | grep -i qla2xxx: this should return some output. If so, the drivers and kernel module have been correctly loaded and the HBAs are ready to use. If no output is present here, search Google for “qlogic linux super installer”. This should direct you to the QLogic support site, which offers an archive with an all-in-one installer that automatically detects, installs and configures the appropriate drivers.
- lsmod | grep -i qla: like the above, this should also return some output, meaning the modules have been correctly loaded. If not, install the drivers as mentioned in the previous point and reboot the server.
5. Enable multipathing. At the time of writing, multipathing and persistent naming are properly implemented in the QLogic drivers; however, Red Hat disables multipathing by default. In most IBM blade to Storwize setups there will be four paths to any given LUN on the SAN, so multipathing should be enabled for increased throughput and fault tolerance. To do so, open /etc/multipath.conf in your favorite text editor. By default, Red Hat blacklists all devices from multipathing with a catch-all blacklist entry; change this so that only the local drive (“sda”, if such exists) is blacklisted.
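A sketch of that edit (stock RHEL 5 multipath.conf syntax; double-check against your own file, and adjust the device name to match your local disk):

```
# Default: blacklist everything
blacklist {
        devnode "*"
}

# Modified: blacklist only the local disk
blacklist {
        devnode "^sda$"
}
```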
6. Ensure the multipath daemon is set to start on every reboot. Use the command chkconfig multipathd --level 3 on for this (given that your normal runlevel is 3).
7. Back on the Storwize V7000, create a volume and assign it to the Red Hat host we created previously. The host will probably still be marked as “offline”, but the IBM Storwize still allows you to map volumes to an offline host.
8. Restart Red Hat by issuing shutdown -r now. After the reboot, issue the command multipath -ll. This should show the four paths mentioned earlier. Change directory to /dev/mapper/, where you should see several device files named along the lines of /dev/mapper/multipath0p1. The number of device files present depends on the number of volumes (LUNs) presented to the blade from the Storwize. Please note these device files will only be present if the directive user_friendly_names yes is present in the multipath.conf file.
9. Use the /dev/mapper/multipath0p1 or similar device files as you would any other hard disk device file. You can use familiar commands such as fdisk, mkfs, mount, du and so on.
Since there are four paths to a given SAN volume, once all the above is done you will observe that besides sda (the local hard disk) there are another four logical drives, e.g. sdb, sdc, sdd and sde. All of these represent a single LUN/volume on the SAN. If you use fdisk to create a partition via the /dev/mapper/multipath device file, this will be reflected in the above, since each drive is assigned a partition number as well, i.e. sdb1, sdc1, sdd1, sde1. The same applies to all other device files.
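As an illustration, a typical sequence on the multipath device might look like this (device names follow the user_friendly_names scheme above; the mount point and filesystem type are my own choices, so adjust to your environment and run as root):

```
fdisk /dev/mapper/multipath0          # create a partition (e.g. one primary partition)
mkfs.ext3 /dev/mapper/multipath0p1    # put a filesystem on the new partition
mkdir -p /mnt/san
mount /dev/mapper/multipath0p1 /mnt/san
df -h /mnt/san                        # confirm the SAN volume is mounted
```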
This document provides a short description, from experience, of the most widely used Clavister console commands. Note: for more information about any of the commands listed below, type help [command]. The commands below apply to Clavister CorePlus v8.9.x.
This command starts up the packet capture mechanism on the Clavister. It provides filtering using wireshark-like expressions (e.g. source and destination IP) as well as filtering by interface and so on. This command is especially useful when troubleshooting connectivity issues, such as suspected ACL or site-to-site VPN issues.
This command only applies in high-availability environments. Simply typing in “ha” will return the HA status of the current unit (active/passive) as well as whether the peer unit is reachable or not. It will also display the time since this unit has been active (if any).
Another two forms of the command:
allow you to hand over “master” (active) control to the peer, or vice versa.
- ipsectunnels and ipsecstats
These two commands allow you to check whether a particular VPN tunnel is up or not. The former gives a quick overview of the current VPN tunnels; the latter shows slightly more detail and also allows filtering by remote peer IP.
This command will kill any IPsec connections to a particular remote peer IP. This comes in handy when tunnel de-synchronisation occurs; that is, when the tunnel does not use keepalives (for example, due to incompatibilities between different vendors) and one side of the tunnel is up while the other side is down. In order to start over, the “killsa” command can be used.
This command is immensely useful when troubleshooting IPsec VPN negotiation issues. It is very similar to “debug ike” / “debug ipsec” on Cisco units, but presents the information in a more user-friendly format.
It will help highlight mistakes in the VPN configuration such as proposal mismatches, PSK problems, and so on.
I recently had a project where I used a combination of Nagios/livecheck and Apache’s CGI to create a lightweight Nagios dashboard for the NOC team. It was quite a simple process, much aided by the fact that the CGI gateway allows you to use any programming language, including my favorite, bash.
The project took off smoothly, with Apache running CGI bash scripts, which in turn piped livecheck queries to Nagios via “unixcat” (a livecheck binary which pipes queries into the unix socket where the livecheck module listens). To present this more simply:
client request → Apache CGI → bash script → unixcat → livecheck unix socket → Nagios
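A minimal sketch of one such CGI handler (the socket path, the query columns, and the fallback branch are my assumptions for illustration, not the original script):

```shell
#!/bin/sh
# CGI sketch: send a livestatus-style query to the livecheck unix socket
# via unixcat and return the raw result as plain text.
SOCKET="${LIVE_SOCKET:-/var/lib/nagios/rw/live}"   # assumed socket path
QUERY='GET hosts
Columns: name state'

printf 'Content-Type: text/plain\r\n\r\n'
if command -v unixcat >/dev/null 2>&1; then
    printf '%s\n' "$QUERY" | unixcat "$SOCKET"
else
    printf '%s\n' "$QUERY"   # unixcat not installed: degrade by echoing the query
fi
```

Dropped into /var/www/cgi-bin, a script like this is what each client request would hit; the real dashboard scripts would then format the unixcat output as HTML rather than plain text.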
Livecheck does an excellent job of providing an amazingly fast, easy and simple API to the current Nagios check states. It is clearly a much superior solution to using Nagios + MySQL. The only problem I ran across was that on one of our servers the “unixcat” program was refusing to close, causing a very large number of unixcat processes to accumulate and hog resources. This put the server under massive load (about 30, going by “top” readings). After some hunting, I discovered that I had neglected to set the SUID permission on the CGI scripts in my /var/www/cgi-bin directory. After using chmod +s to set the SUID attribute on the scripts, the system correctly started closing off the finished unixcat processes and freeing up resources. The load immediately fell and the dashboard was much snappier when loading.
Further Reading Links
An intro to apache CGI can be found here: http://httpd.apache.org/docs/2.0/howto/cgi.html
The livecheck homepage: http://mathias-kettner.de/checkmk_livecheck.html
To keep this blog post a bit more relevant: there have been some improvements since that post was written. Squid v3.2 was released earlier this year, making SSL interception easier and more seamless. The new features for HTTPS interception can be found by reading through the man page for http_port:
1. The “transparent” keyword has been changed to “intercept”:
intercept Rename of old 'transparent' option to indicate proper functionality.
“intercept” is now better described as:
intercept Support for IP-Layer interception of outgoing requests without browser settings. NP: disables authentication and IPv6 on the port.
2. To avoid certificate errors when intercepting HTTPS sites, Squid can now dynamically generate SSL certificates, using generate-host-certificates. This means the CN of the certificate will now match that of the origin server, though the certificate will still be signed with Squid’s own private key:
SSL Bump Mode Options: In addition to these options ssl-bump requires TLS/SSL options.
generate-host-certificates[=&lt;on|off&gt;] Dynamically create SSL server certificates for the destination hosts of bumped CONNECT requests. When enabled, the cert and key options are used to sign generated certificates. Otherwise generated certificate will be selfsigned. If there is a CA certificate, lifetime of the generated certificate equals lifetime of the CA certificate. If generated certificate is selfsigned, lifetime is three years. This option is enabled by default when ssl-bump is used. See the ssl-bump option above for more information.
Looks like the above is an offshoot of the excellent work here: http://wiki.squid-cache.org/Features/DynamicSslCert
Make sure to use the above two features for smoother HTTPS interception. Though remember: always warn users that SSL traffic is being decrypted; privacy is a highly valued right…
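Putting the two features together, a matching squid.conf fragment might look something like this (the port numbers and certificate paths are placeholders, and the exact ssl_bump ACL syntax varies between 3.2.x releases, so check your own release notes before copying):

```
# Intercept plain HTTP
http_port 3128 intercept

# Intercept HTTPS and mint per-host certificates signed by our own CA
https_port 3129 intercept ssl-bump generate-host-certificates=on \
    cert=/etc/squid/ssl/proxyCA.pem key=/etc/squid/ssl/proxyCA.key

ssl_bump allow all
```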