Tuesday, July 12, 2011

RAID5/6 (using ZFS) in FreeBSD 8.x


OK, FreeBSD still lacks a decent RAID5 implementation in its core system (some people use the third-party geom_raid5 module that you can find in FreeNAS) – but with ZFS moved to production status in FreeBSD 8, we can now use that instead.
ZFS supports various RAID levels.  We will use RAID5 in this example – I'll explain how to set up RAID6 later in the article.
For my example I will use 6 x 2TB hard drives freshly installed in my system (listed as ad10 ad12 ad14 ad16 ad18 ad20 in dmesg) to create a RAID5 set, giving 5 x 2TB of usable space and tolerating a single disk failure without loss of data.  Remember, you need a minimum of 3 disks for RAID5, and you get N-1 capacity (N-2 for RAID6).
First, we need to load ZFS into the system… add the following into your /boot/loader.conf:
vfs.zfs.prefetch_disable="1"
zfs_load="YES"
This will cause ZFS to load into the kernel on each boot.  prefetch_disable is set by default on servers with less than 4GB of RAM, but it's safe to add it anyway – I've found it produces far more stable results on live systems, so go with it ;-)
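Once you reboot, you can check that the module loaded and that the tunable took effect with something like:
kldstat | grep zfs
sysctl vfs.zfs.prefetch_disable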
Next, add the following into your /etc/rc.conf file:
zfs_enable="YES"
This will re-mount any ZFS filesystems on every boot, and setup any necessary settings on each boot.
Now, we will add all 6 disks into a RAID5 set called 'datastore' – run the following as root:
zpool create datastore raidz ad10 ad12 ad14 ad16 ad18 ad20
'raidz' is ZFS's name for RAID5 – to do RAID6 you would use 'raidz2' instead.  You can confirm the command was successful with zpool status as follows:
  pool: datastore
 state: ONLINE
 scrub: none
config:

        NAME         STATE     READ WRITE CKSUM
        datastore    ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            ad10     ONLINE       0     0     0
            ad12     ONLINE       0     0     0
            ad14     ONLINE       0     0     0
            ad16     ONLINE       0     0     0
            ad18     ONLINE       0     0     0
            ad20     ONLINE       0     0     0

errors: No known data errors
This shows the raid set is online and healthy.  When there are problems, it will drop to DEGRADED state.  If you have too many disk failures, it will show FAULTED and the entire array is lost (in our example we would need to lose 2 disks to cause this, or 3 in a RAID6 setup).
Now we will set the pool to auto-recover when a disk is replaced – run the following as root:
zpool set autoreplace=on datastore
This will cause the array to automatically re-add a replacement disk inserted in the same physical location (e.g. if ad16 fails and you replace it with a new disk, the new disk will be re-added to the pool).
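If the pool does not pick up the new disk automatically, you can trigger the replacement by hand (here ad16 stands in for the failed disk from our example):
zpool replace datastore ad16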
You will now notice that you have a /datastore mount point with the entire storage available under it.  You can confirm this with zfs list as follows:
NAME             USED  AVAIL  REFER  MOUNTPOINT
datastore       2.63T  6.26T  29.9K  /datastore
You now have a working RAID5 (or RAID6) software raid setup in FreeBSD.
Generally, to set up RAID6 instead of RAID5 you replace the word raidz with raidz2.  RAID5 allows for a single disk failure without data loss; RAID6 allows for a double disk failure without data loss.
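For example, the RAID6 equivalent of the create command used earlier would be (giving 4 x 2TB of usable space from the same six disks):
zpool create datastore raidz2 ad10 ad12 ad14 ad16 ad18 ad20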
After a disk failure, run zpool status to ensure the state is ONLINE for all the disks in the array, then run zpool scrub datastore to make ZFS rebuild the array.  Rebuilding takes time (it rebuilds based on used data, so the fuller your array, the longer the rebuild!).  Once the scrub or "resilver" process has completed, your array will return to ONLINE status and be fully protected against disk failures once again.
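Putting that together, a typical recovery after swapping out the failed disk might look like this:
zpool status datastore
zpool scrub datastore
zpool status datastore
The first status check confirms the replacement disk is back ONLINE; the final one lets you watch the scrub/resilver progress.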
As this process can take (literally) hours to complete, some people prefer a RAID6 setup to allow for a second disk failure during those few hours.  This is a decision you should make based on the importance of the data you will store on the array!

LINK URL: https://www.dan.me.uk/blog/2010/02/07/raid56-using-zfs-in-freebsd-8-x/

Sunday, May 29, 2011

Thinkpad TrackPoint Scrolling in Ubuntu (Maverick) 10.10 - IBM T60


My fellow citizens, our long national nightmare of having to use bad pointing devices is over.
Starting with Ubuntu Maverick/10.10, configuring scrolling with the middle button and the TrackPoint is now really easy:
  • Install gpointing-device-settings (sudo aptitude install gpointing-device-settings)
  • Run gpointing-device-settings &, enable Use wheel emulation (using button 2), and enable both vertical and horizontal scrolling.


(gpointing-device-settings is not new but Maverick is the first time it has really worked well for me.)
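If you prefer to configure this from the command line (for example in a startup script), the same behaviour can usually be set through the evdev wheel-emulation properties with xinput – a rough sketch, assuming the usual TrackPoint device name (check xinput list for yours):
xinput set-prop "TPPS/2 IBM TrackPoint" "Evdev Wheel Emulation" 1
xinput set-prop "TPPS/2 IBM TrackPoint" "Evdev Wheel Emulation Button" 2
xinput set-prop "TPPS/2 IBM TrackPoint" "Evdev Wheel Emulation Axes" 6 7 4 5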


Saturday, April 23, 2011

Thursday, December 2, 2010

How to Flush DNS


How to Flush DNS in Microsoft Windows

In Microsoft Windows, you can use the command ipconfig /flushdns to flush the DNS resolver cache. Open the command prompt and type the following:
C:\>ipconfig /flushdns
Windows IP Configuration
Successfully flushed the DNS Resolver Cache.
The above command will completely flush the DNS cache, deleting any incorrect entries too. You can also use the command ipconfig /displaydns to view the DNS resolver cache.

Turning Off DNS Caching under Microsoft Windows

If you experience frequent issues with DNS caching under Microsoft Windows, you can disable client-side DNS caching with either of these two commands:
net stop dnscache
sc \\servername stop dnscache
This will disable DNS caching until the next reboot. To make the change permanent, use the Service Controller tool or the Services tool to set the DNS Client service startup type to Disabled. You can permanently disable the DNS Client by following the steps below:
  • Go to Start and click Run.
  • Type services.msc in the Run box.
  • A window listing all the services will pop up. Find the service called DNS Client.
  • Double-click the DNS Client service, set its Startup type to Disabled, and click Stop. Similarly, you can restart it later by clicking Start.
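If you prefer the command line, the Service Controller can set the startup type directly (note that the space after start= is required):
sc config dnscache start= disabled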

Tuning DNS Caching under Microsoft Windows

You can modify the behavior of the Microsoft Windows DNS caching algorithm by setting two registry entries under the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters registry key.
MaxCacheTtl represents the maximum time that the results of a DNS lookup will be cached. The default value is 86,400 seconds (one day). If you set this value to 1, DNS entries will only be cached for a single second.
MaxNegativeCacheTtl represents the maximum time that the results of a failed DNS lookup will be cached. The default value is 900 seconds. If you set this value to 0, failed DNS lookups will not be cached.
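As a sketch, you could set both values from an elevated command prompt with reg add (the 3600 here is just an example TTL of one hour):
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters" /v MaxCacheTtl /t REG_DWORD /d 3600 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters" /v MaxNegativeCacheTtl /t REG_DWORD /d 0 /f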

Flush DNS in Mac OS X

In Mac OS X Leopard, you can use the command dscacheutil -flushcache to flush the DNS resolver cache:
bash-2.05a$ dscacheutil -flushcache
In Mac OS X versions 10.5.1 and before, the command lookupd -flushcache performed the same task:
bash-2.05a$ lookupd -flushcache

Flush DNS in Linux

In Linux, the nscd daemon manages the DNS cache. To flush the DNS cache, restart the nscd daemon with the command `/etc/init.d/nscd restart`.

Linux and nf_conntrack.

When you see messages like this in your kernel log:


Dec  2 13:12:28 VivaLAN kernel: [8768983.190310] nf_conntrack: table full, dropping packet.
Dec  2 13:14:32 VivaLAN kernel: [8769110.070735] nf_conntrack: table full, dropping packet.
Dec  2 13:14:32 VivaLAN kernel: [8769110.082320] nf_conntrack: table full, dropping packet.
Dec  2 13:14:32 VivaLAN kernel: [8769110.082320] nf_conntrack: table full, dropping packet.
Dec  2 13:14:32 VivaLAN kernel: [8769110.086320] nf_conntrack: table full, dropping packet.
Dec  2 13:14:32 VivaLAN kernel: [8769110.089848] nf_conntrack: table full, dropping packet.
Dec  2 13:14:32 VivaLAN kernel: [8769110.094892] nf_conntrack: table full, dropping packet.
Dec  2 13:14:32 VivaLAN kernel: [8769110.099703] nf_conntrack: table full, dropping packet.
Dec  2 13:14:32 VivaLAN kernel: [8769110.119987] nf_conntrack: table full, dropping packet.
Dec  2 13:14:32 VivaLAN kernel: [8769110.127979] nf_conntrack: table full, dropping packet.
Dec  2 13:14:32 VivaLAN kernel: [8769110.138279] nf_conntrack: table full, dropping packet.

you can disable connection tracking entirely:

iptables -t raw -I OUTPUT -j NOTRACK
iptables -t raw -I PREROUTING -j NOTRACK

Alternatively, when you do not want to disable conntrack completely, you can increase the maximum number of tracked connections:
# cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
65432
# echo "130864" > /proc/sys/net/ipv4/netfilter/ip_conntrack_max
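Note that values written under /proc do not survive a reboot. To make the new limit persistent, you would typically add the setting to /etc/sysctl.conf (on newer kernels the key is net.netfilter.nf_conntrack_max rather than net.ipv4.netfilter.ip_conntrack_max):
net.ipv4.netfilter.ip_conntrack_max = 130864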