Tuesday, 31 March 2009

Obtain reads/writes information for the non-default key caches

While there isn't a simple SQL command you can pass to the mysql client to obtain this information, you can use the following command to write it to the MySQL error log:

shell> mysqladmin debug

Every time you execute this command, a new entry that includes information on the individual key caches is appended to the log file. Here is an example of how to create an independent key cache and the resulting debug entry:
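
The example assumes a small MyISAM table test.t already exists (CACHE INDEX applies to MyISAM indexes); for instance:

mysql> CREATE TABLE test.t (id INT, KEY (id)) ENGINE=MyISAM;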

mysql> SET GLOBAL tcache.key_buffer_size=2048*1024;
Query OK, 0 rows affected (0.00 sec)

mysql> CACHE INDEX t IN tcache;
+--------+--------------------+----------+----------+
| Table  | Op                 | Msg_type | Msg_text |
+--------+--------------------+----------+----------+
| test.t | assign_to_keycache | status   | OK       |
+--------+--------------------+----------+----------+
1 row in set (0.01 sec)


======= error log =============
Key caches:
default
Buffer_size: 8384512
Block_size: 1024
Division_limit: 100
Age_limit: 300
blocks used: 3
not flushed: 0
w_requests: 5
writes: 1
r_requests: 15
reads: 3

tcache
Buffer_size: 2097152
Block_size: 1024
Division_limit: 100
Age_limit: 300
blocks used: 0
not flushed: 0
w_requests: 0
writes: 0
r_requests: 0
reads: 0

===============================

Now if we insert some entries into the table, read them back, and execute "mysqladmin debug" again, we can see the updated stats:


mysql> INSERT INTO t (id) VALUES (0),(1),(2),(3),(4);
Query OK, 5 rows affected (0.01 sec)
Records: 5  Duplicates: 0  Warnings: 0

======= error log =============
Key caches:
default
Buffer_size: 8384512
Block_size: 1024
Division_limit: 100
Age_limit: 300
blocks used: 3
not flushed: 0
w_requests: 5
writes: 1
r_requests: 15
reads: 3

tcache
Buffer_size: 2097152
Block_size: 1024
Division_limit: 100
Age_limit: 300
blocks used: 1
not flushed: 0
w_requests: 5
writes: 1
r_requests: 4
reads: 0
===============================

ZFS Replication: HA backup

A short introduction to ZFS: ZFS is a transactional file system, which means that the file system state is always consistent on disk. With a transactional file system, data is managed using copy-on-write semantics. Data is never overwritten, and any sequence of operations is either entirely committed or entirely ignored. In addition, synchronous data (written using the O_DSYNC flag) is always guaranteed to be written before returning, so it is never lost.
http://docs.sun.com/app/docs/doc/819-5461/zfsover-2?a=view

Steps taken to build the ZFS replication:

1. Create a ZFS pool to store the data directory in. The command here will vary; there are several ways to create a pool, using a file, a partition, a slice, or a whole disk. The only resource I had available was a file, so I created the file first:


shell> mkfile 1G /export/home/data


2. Then create a ZFS pool from the file:



shell> zpool create datadir /export/home/data


Now there is a folder in the root named 'datadir'. You can create a pool from a partition, slice, or whole drive in a similar manner by giving the device name as it is listed in /dev/dsk.
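
For example, to create the pool from a whole disk instead (the device name c1t1d0 is just a placeholder; use whichever device /dev/dsk lists on your system):

shell> zpool create datadir c1t1d0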


3. Once the 'datadir' pool is created, we can copy our data directory into it. Shut down the MySQL server and check the error log to ensure a clean shutdown.
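
The copy command depends on where your current data directory lives; assuming it is /var/lib/mysql (adjust the source path and ownership to your installation), something like:

shell> cp -rp /var/lib/mysql/* /datadir/
shell> chown -R mysql:mysql /datadir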

4. Add a datadir entry to your my.cnf file that points to the new folder:



in my.cnf:

[mysqld]
datadir = /datadir

Now we have our data directory inside a ZFS pool, and we need to send it to a pool on another machine for HA backup. Use steps 1 and 2 to create another pool on the second machine (the data itself will arrive via zfs send/recv). For this example we will have a 'SlaveA' machine with a pool named 'slavepool'.


5. We will need to snapshot the data directory in ZFS to be able to send it to SlaveA.



shell> zfs snapshot datadir@snap1



This creates snapshot 'snap1' of the datadir pool.



You can verify the snapshots with the list command: 'zfs list -t snapshot'
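
For example (the sizes shown are only illustrative):

shell> zfs list -t snapshot
NAME            USED  AVAIL  REFER  MOUNTPOINT
datadir@snap1      0      -   156M  -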

6. Next we need to send the snapshot to the SlaveA server and apply it to the 'slavepool' pool:



shell> zfs send datadir@snap1 | ssh user_name@SlaveA pfexec zfs recv -F slavepool



This sends the snapshot 'datadir@snap1' over ssh to the SlaveA server, which receives ('recv') the stream into the 'slavepool' pool (-F lets recv replace the existing contents of slavepool with the incoming stream).



7. At this point we have a snapshot of the data directory in slavepool on SlaveA. To keep receiving snapshot updates to it, we need to make slavepool read-only so that no data or metadata can be changed locally by the OS.



On SlaveA server:



shell> zfs set readonly=on slavepool


8. Start the MySQL server back up and verify that it is using the new data directory by checking the global variables:



mysql> SHOW GLOBAL VARIABLES LIKE '%datadir%';



+---------------+------------------------+
| Variable_name | Value                  |
+---------------+------------------------+
| datadir       | /datadir/              |
+---------------+------------------------+

At this point you will need to create a shell script that will snapshot the datadir and send the incremental snapshots to the slave.



Our original snapshot is datadir@snap1. Now we take another snapshot and call it datadir@snap2. We can then apply the file system changes to SlaveA by sending the incremental difference to its slavepool:



Take the snapshot:



shell> zfs snapshot datadir@snap2



Then send the incremental (-i) change to the slave:



shell> zfs send -i datadir@snap1 datadir@snap2 | ssh user_name@SlaveA pfexec zfs recv slavepool


The above two steps will need to be handled by a scheduled script so that the SlaveA server's snapshot stays current. You will need to test the frequency to find a good balance between load and how current your snapshots are; a minimal sketch of such a script follows.
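
A minimal sketch of what such a script could look like, assuming passwordless ssh for user_name, sufficient privileges to snapshot and send, and that the name of the last snapshot sent is kept in a small state file (the state file, snapshot naming scheme, and paths are illustrative, not part of the original setup):

#!/bin/sh
# Hypothetical incremental replication script: takes a new snapshot of the
# 'datadir' pool and sends the delta since the last snapshot to slavepool
# on SlaveA.

POOL=datadir
SLAVE=user_name@SlaveA
STATE=/var/tmp/zfs_last_snap           # holds the name of the last snapshot sent

PREV=`cat $STATE`                      # e.g. datadir@snap1
NEXT=$POOL@snap`date '+%Y%m%d%H%M%S'`  # new snapshot named by timestamp

zfs snapshot $NEXT || exit 1
zfs send -i $PREV $NEXT | ssh $SLAVE pfexec zfs recv slavepool || exit 1
echo $NEXT > $STATE

Seed the state file with datadir@snap1 after the initial full send, then run the script from cron at whatever interval your testing shows to be a good balance.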

If you need to switch to the SlaveA server, remember you will need to disable the read-only setting we applied earlier:



shell> zfs set readonly=off slavepool


Then you can start the MySQL server on SlaveA. It should go through crash recovery on the InnoDB tablespace. You may need to perform a REPAIR TABLE on MyISAM tables. It is best to use an engine with automatic crash recovery, like InnoDB, for this configuration.
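
For example, for the MyISAM table t used earlier (illustrative only):

mysql> CHECK TABLE t;
mysql> REPAIR TABLE t;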

Monday, 30 March 2009

Solaris - InnoDB Plugin From Source

Since there isn't a binary InnoDB plugin available for Solaris 10, I decided to share my experience of building the InnoDB plugin from source on Solaris 10.


Assumptions

1. Readers are assumed to have at least basic Unix/Linux administration skills.
2. Readers are assumed to have a running copy of Solaris 10.
3. Readers are assumed to have backed up any critical data before the installation.

1. Get the latest InnoDB plugin from
http://www.innodb.com/innodb_plugin/download/

2. Get the latest MySQL 5.1 sources from
http://dev.mysql.com/downloads/mysql/5.1.html#source

3. Replace the contents of the mysql-5.1.N/storage/innobase/ directory
with the contents of the InnoDB plugin source directory:
shell> cd mysql-5.1.N/storage
shell> mv innobase innobase-default
shell> tar xzf /path/to/innodb_plugin-1.0.N.tar.gz
shell> mv innodb_plugin-1.0.N innobase

4. Compile MySQL using the configure line below (step 8).
Note: * This uses Sun Studio Express (not gcc).
      * --prefix=/opt/mysql/5.1.N is used, but you can change the
        directories if you like, along with --localstatedir and --libexecdir.
      * It may be a good idea to change --with-comment and
        --with-server-suffix to indicate that this is your own build
        with the InnoDB Plugin.

5. Add the flags:

CC=/path/to/SunStudioExpress/bin/cc
CXX=/path/to/SunStudioExpress/bin/CC
CFLAGS="-g -xO2 -mt -fsimple=1 -ftrap=%none -nofstore -xbuiltin=%all -xlibmil -xlibmopt -xtarget=generic"
CXXFLAGS="-g -xO2 -mt -fsimple=1 -ftrap=%none -nofstore -xbuiltin=%all -features=no%except -xlibmil -xlibmopt -xtarget=generic"

6. export CC CXX CFLAGS CXXFLAGS

7. Optionally, run 'make clean'.

8. Run configure (set $PREFIX to your installation prefix first, for example /opt/mysql/5.1.N from step 4):

./configure --prefix=$PREFIX --localstatedir=$PREFIX/data \
    --libexecdir=$PREFIX/bin \
    --with-comment="MySQL Server with InnoDB Plugin" \
    --with-server-suffix="-innodb-plugin" \
    --enable-thread-safe-client --enable-local-infile \
    --with-mysqld-libs=-lmtmalloc \
    --with-pic --with-client-ldflags="-static" \
    --with-mysqld-ldflags="-static" --with-zlib-dir=bundled \
    --with-big-tables --with-ssl \
    --with-plugins=partition,archive,blackhole,csv,federated,heap,innobase,myisam \
    --with-extra-charsets=complex

9. Run 'make'.

10. Run 'make install'.
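
Once installed, one quick sanity check (assuming the build replaced the builtin InnoDB with the plugin as described above) is to look for the innodb_version variable, which the InnoDB plugin adds:

mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_version';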