Thursday 24 December 2009

INSTALLATION OF POSTGIS/POSTGRESQL ON CENTOS 5

Install PROJ
If you are missing PROJ or are running a version below 4.5, install it by following these steps.
wget http://download.osgeo.org/proj/proj-4.6.0.tar.gz
tar xvzf proj-4.6.0.tar.gz
cd proj-4.6.0
./configure && make clean && make
make install
ldconfig
cd ..
Install GEOS
If you are missing GEOS or are running a version below 3.0, install it by following these steps.
wget http://download.osgeo.org/geos/geos-3.0.3.tar.bz2
tar xvjf geos-3.0.3.tar.bz2
cd geos-3.0.3
./configure && make clean && make
make install
ldconfig
cd ..
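
To confirm both libraries are in place, each can report its version (assuming the default /usr/local install prefix; proj prints its release string in its usage output when run without arguments):
proj 2>&1 | head -1
geos-config --version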

Install Postgresql 8.3.8
Exclude PostgreSQL from the CentOS base repository, as PostgreSQL 8.3 is not included in the base repository:
echo "exclude=postgresql*" >> /etc/yum.repos.d/CentOS-Base.repo
Download and install the RPMs from http://yum.pgsqlrpms.org:
wget http://yum.pgsqlrpms.org/reporpms/8.3/pgdg-centos-8.3-6.noarch.rpm
rpm -ivh pgdg-centos-8.3-6.noarch.rpm
yum install postgresql postgresql-server postgresql-contrib postgresql-devel

Add:
/usr/local/pgsql/lib
/usr/local/lib
to "/etc/ld.so.conf" and run "ldconfig".

Retrieve the PostGIS source archive from http://postgis.refractions.net/download/postgis-1.4.0.tar.gz. Uncompress and untar the archive.
# gzip -d -c postgis-1.4.0.tar.gz | tar xvf -
Then run:
# ./configure
If you want support for coordinate reprojection, you must have the Proj4 library installed. If ./configure didn't find it, try using the --with-proj=PATH switch to specify a specific Proj4 installation directory.
If you want to use GEOS functionality, you must have the GEOS library installed. GEOS 3.0.3+ is preferred and is required for some functions, such as ST_SimplifyPreserveTopology, to be available. If ./configure didn't find it, try using --with-geos=PATH to specify the full path to the geos-config program.
Run the compile and install commands.
# make
# make install
All files are installed using information provided by pg_config.
Libraries are installed in [pkglibdir]/lib/contrib.
Important support files such as postgis.sql are installed in [prefix]/share/contrib.
Loader and dumper binaries are installed in [bindir]/.
PostGIS requires the PL/pgSQL procedural language extension, so before loading the postgis.sql file you must first enable PL/pgSQL. You should use the createlang command (if the template_postgis database does not exist yet, create it first with createdb). The PostgreSQL Programmer's Guide has the details if you want to do this manually for some reason.
# createdb template_postgis
# createlang plpgsql template_postgis
Now load the PostGIS object and function definitions into your database by loading the postgis.sql definitions file (called lwpostgis.sql in releases before 1.4).
# psql -Upostgres -d template_postgis -f /usr/share/pgsql/contrib/postgis.sql
The PostGIS server extensions are now loaded and ready to use.
For a complete set of EPSG coordinate system definition identifiers, you can also load the spatial_ref_sys.sql definitions file and populate the SPATIAL_REF_SYS table.
# psql -Upostgres -d template_postgis -f /usr/share/pgsql/contrib/spatial_ref_sys.sql
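
As a quick sanity check, you can create a spatially enabled database from the template and ask PostGIS for its version (mygisdb is just an example name):
# createdb -T template_postgis mygisdb
# psql -Upostgres -d mygisdb -c "SELECT postgis_version();"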

Wednesday 23 December 2009

Master Slave Setup

Assumptions:

1. Master Host IP is: 192.168.16.41
2. Slave Host IP is: 192.168.16.52

--Preparing the Master:

1. Create Replication User on Master
a. Log on to the master database server and issue the following commands from the MySQL prompt.
$ mysql -uroot -p
mysql> GRANT REPLICATION SLAVE ON *.*
    -> TO 'repl'@'192.168.16.52' IDENTIFIED BY 'slavepass';
mysql> FLUSH PRIVILEGES;
mysql> quit;
2. Update my.cnf file using your favourite editor (vi or nano).
3. Add the following lines under the [mysqld] section:

log-bin=mysql-bin
server-id=1

4. Restart Master database server
$ /etc/init.d/mysql restart

5. Get the master status information
a. Log on to MySQL and run the following commands:
$ mysql -uroot -p
mysql> FLUSH TABLES WITH READ LOCK;
b. Leave this session running until you have taken the data snapshot

6. Create a data snapshot [using mysqldump tool]
a. In another ssh session use mysqldump to create a snapshot of all the databases. For example:
$ mysqldump -uroot -p --all-databases --master-data=1 > fulldump.db

7. In the session/window where you acquired the read lock, release the lock:
mysql> UNLOCK TABLES;
8. Copy the backup file, i.e. fulldump.db, onto the replication database server using scp/rsync:
$ scp fulldump.db root@192.168.16.52:

--Preparing the Slave:

1. Update my.cnf file using your favourite editor (vi or nano).
a) Add the following lines under the [mysqld] section

server-id=2
master-host=192.168.16.41
master-user=repl
master-password=slavepass
relay-log=localhost-relay-bin
relay-log-index=localhost-relay-bin.index

b) Restart database server
$ /etc/init.d/mysql restart
c) Restore backup from file
$ mysql -uroot -p < fulldump.db
2. Start the replication slave (the dump taken with --master-data=1 already recorded the master's binary log coordinates):
mysql> START SLAVE;
3. Monitor Replication Status
mysql> SHOW SLAVE STATUS\G
4. The slave is replicating and catching up with the master if the Slave_IO_Running and Slave_SQL_Running values are both Yes.
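
As a final smoke test, you can push a change through and watch it arrive; a minimal sketch (the repltest schema and table are just example names):

-- on the master (192.168.16.41)
mysql> CREATE DATABASE repltest;
mysql> CREATE TABLE repltest.t1 (id INT PRIMARY KEY);
mysql> INSERT INTO repltest.t1 VALUES (1);

-- on the slave (192.168.16.52), a moment later
mysql> SELECT * FROM repltest.t1;  -- should return the row inserted on the master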

Monday 21 December 2009

Installing PostgreSQL Standby Server

Assumptions:

1. Passwordless (key-based) ssh authentication is enabled between the primary and standby database servers.
2. rsync is installed on both servers.
3. The hosts file (i.e. /etc/hosts) contains an entry for the 'standby' server.
4. The pg_standby script is installed on the standby server.
5. This path exists on the 'standby' server: /backup/psql_wal, and the postgres user has read/write permissions on it.
6. PostgreSQL 8.3.8 is installed on both the primary and standby servers.

Primary Database Setup:

--Edit the postgresql.conf file. Set the following parameters

archive_command = '/usr/bin/rsync -a %p standby:/backup/psql_wal/%f </dev/null'
archive_timeout = 120

--Restart postgres to apply changes:

su postgres
pg_ctl -D /data/pgdata restart

--Start hot backup:
echo "SELECT pg_start_backup('mybackup');" | psql -U postgres

--Backup data directory using tar

tar -cvf /tmp/backup.tar /data/pgdata

--Copy backup onto standby server:

rsync /tmp/backup.tar standby:/backup

--Stop hot backup, clear the flag
echo "SELECT pg_stop_backup();" | psql -U postgres

Standby Database Setup:

--Stop postgres:
su postgres
pg_ctl -D /data/pgdata stop

--Restore the backup (the tar file was copied to /backup on the standby; the archive stores paths relative to /, so extract from the root directory):
rm -fR /data/pgdata/*
cd / && tar -xvf /backup/backup.tar
rm -fR /data/pgdata/pg_xlog/*
mkdir /data/pgdata/pg_xlog/archive_dir

--Create recovery file: /data/pgdata/recovery.conf
--add following line:
restore_command = 'pg_standby -l -d -s 2 -t /tmp/pgsql.trigger.5432 /backup/psql_wal/ %f %p %r 2>>standby.log'

--Correct permissions:
chown -R postgres:postgres /data/pgdata

--Start postgres on standby server:
su postgres
pg_ctl -D /data/pgdata -l /data/pgdata/pg.log start

Monitor Standby Server:
tail -f /data/pgdata/standby.log
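
To fail over to the standby, create the trigger file named in the -t option of the restore_command above; pg_standby will then stop waiting for WAL segments, finish recovery, and bring the server up read-write:
touch /tmp/pgsql.trigger.5432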

Using MySQL with memcached

Installing memcached on Centos 5 (x86_64)

$ wget http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm

--Install the RPMforge repository RPM

$ rpm -Uvh rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm

--Now we can simply use yum to install Memcached:
$ yum install memcached

--Start memcached. Note that memcached listens on port 11211 by default

$ memcached -d -u nobody -l 127.0.0.1

--Install libmemcached

$ wget http://download.tangent.org/libmemcached-0.35.tar.gz
$ tar -zxf libmemcached-0.35.tar.gz
$ cd libmemcached-0.35
$ ./configure --prefix=/usr/local/memcached
$ make clean && gmake
$ gmake install

--Add following lines into /etc/ld.so.conf

/usr/local/lib
/usr/lib
/usr/lib64/mysql
/usr/local/memcached/lib
/usr/lib64/mysql/lib

-- Creates the necessary links and cache to the most recent shared libraries

$ ldconfig

-- Install the memcached UDFs
$ wget http://download.tangent.org/memcached_functions_mysql-0.9.tar.gz
$ tar -zxf memcached_functions_mysql-0.9.tar.gz
$ cd memcached_functions_mysql-0.9
$ ./configure --with-mysql=/usr/bin/mysql_config --libdir=/usr/lib64/mysql --with-libmemcached=/usr/local/memcached/lib
$ make clean && gmake
$ gmake install

-- Once this is done, initialize memcached functions with MySQL.

shell# cd sql
shell# mysql < install_functions.sql

-- Connect to MySQL

mysql> SELECT memc_servers_set('127.0.0.1:11211');
mysql> SELECT memc_set('mykey', 'Getting this with SELECT means all works well');
+--------------------------------------------------------------------+
| memc_set('mykey', 'Getting this with SELECT means all works well') |
+--------------------------------------------------------------------+
| 0 |
+--------------------------------------------------------------------+
1 row in set (0.03 sec)

mysql> SELECT memc_get('mykey');
+-----------------------------------------------+
| memc_get('mykey') |
+-----------------------------------------------+
| Getting this with SELECT means all works well |
+-----------------------------------------------+
1 row in set (0.00 sec)
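
A common use of these UDFs is keeping the cache in sync from inside MySQL itself; a minimal sketch, assuming a hypothetical users table whose rows you want mirrored into memcached on every update:

mysql> CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(64));
mysql> DELIMITER //
mysql> CREATE TRIGGER users_memc_update AFTER UPDATE ON users
    -> FOR EACH ROW
    -> BEGIN
    ->   -- push the new value into memcached under a per-row key
    ->   SET @r = memc_set(CONCAT('users:', NEW.id), NEW.name);
    -> END//
mysql> DELIMITER ;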

Tuesday 1 December 2009

replication monitoring script

This script monitors mysqld and replication status. If MySQL is not running, or the replication slave is not running, it logs a message to a log file. You can customize this script to send an alert via email, etc.

##########
#!/bin/bash
# Monitor mysqld and replication; log problems to $LOG.

HOST=`hostname -s`
USR=root
PASS=xxxxx
MYSQL="/usr/bin/mysql -h$HOST -u$USR -p$PASS"
MYSQLADMIN="/usr/bin/mysqladmin -h$HOST -u$USR -p$PASS"
TMP=/tmp/mysql.log
LOG=/var/logs/monitoring.txt
MAXBEHIND=7200 # [Critical] Two hours behind
MINBEHIND=3600 # [Warning] One hour behind

# Check if mysqld is alive and accepting connections
IS_ALIVE=`$MYSQLADMIN ping | grep -c 'alive'`
if [ "$IS_ALIVE" != "1" ]; then
    ERRMSG="[Critical] MySQL Down. `date`"
    echo $ERRMSG >> $LOG
    exit 1
fi

# Replication monitoring
$MYSQL -e "show slave status\G" > $TMP
SB=`grep Seconds_Behind_Master $TMP | cut -f2 -d: | tr -d " "`
SBNUM=$SB

# An empty value means the server is not configured as a slave;
# NULL means the slave threads are not running.
if [ -z "$SB" -o "$SB" = "NULL" ]; then
    ERRMSG="[Critical] Replication Down. `date`"
    echo $ERRMSG >> $LOG
    $MYSQL -e "show slave status\G" >> $LOG
elif [ $SBNUM -ge $MAXBEHIND ]; then
    ERRMSG="[Critical] Replication behind ${SBNUM}s. `date`"
    echo $ERRMSG >> $LOG
elif [ $SBNUM -ge $MINBEHIND ]; then
    ERRMSG="[Warning] Replication behind ${SBNUM}s. `date`"
    echo $ERRMSG >> $LOG
fi
########################

Check the log file for warnings


shell# tail -f /var/logs/monitoring.txt
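
To run the check unattended, you can schedule it from cron; a minimal sketch (the five-minute interval and the script path are example values):

# /etc/crontab entry: run the monitoring script every five minutes as root
*/5 * * * * root /usr/local/bin/mysql_monitor.sh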

Tuesday 18 August 2009

MySQL InnoDB freeze/lock-up under insert load

"It's been up for a few days and runs fast/well, but twice now it has suddenly locked up with NO activity at all - all of the InnoDB thread slots have tasks which seem stuck, and hundreds of other threads waiting - this goes on forever, like 1-2 hours at CPU 100% idle, no queries executing, etc., until we discovered FLUSH TABLES will fix it."
http://forums.mysql.com/read.php?22,273419,274069#msg-274069

Reason: you may need to increase the value of the table_cache variable. You can check whether you need to do so by watching the Opened_tables status variable; if its value is very large or increases rapidly, even when you have not issued many FLUSH TABLES statements, you should increase the table cache size.

Solution: increase table_cache.
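
A quick way to check the counter and raise the cache at runtime (table_cache is the 5.0 name; 5.1 renamed it to table_open_cache, and 2048 is just an example value - persist it in my.cnf under [mysqld] as well):

mysql> SHOW GLOBAL STATUS LIKE 'Opened_tables';
mysql> SET GLOBAL table_cache = 2048;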


Wednesday 1 July 2009

Manually DB Creation with 11G

1) Set your ORACLE_SID

shell>export ORACLE_SID=test

2) Create pfile

shell> vi $ORACLE_HOME/dbs/inittest.ora


control_files = (/u01/app/oracle/product/11.1.0/db_1/dbs/control_test_1.ctl,/u01/app/oracle/product/11.1.0/db_1/dbs/control_test_2.ctl,/u01/app/oracle/product/11.1.0/db_1/dbs/control_test_3.ctl)
undo_management = AUTO
undo_tablespace = ts_undo
db_name = test
db_block_size = 8192
sga_max_size = 524288000 # 500M
sga_target = 524288000 #500M


3) Create a password file

#$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwdtest.ora password=oracle entries=5


4) Start the instance



sqlplus / as sysdba

startup nomount

5) Create the database

create database test
user sys identified by gd123
user system identified by gd123
controlfile reuse
maxdatafiles 22
maxinstances 2
character set we8iso8859p1
national character set utf8
--set default bigfile tablespace
-- database logging clauses
logfile
group 1 ('/u01/app/oracle/oradata/test/redo_test_1a.log', '/u01/app/oracle/oradata/test/redo_test_2a.log') size 4M,
group 2 ('/u01/app/oracle/oradata/test/redo_test_1b.log', '/u01/app/oracle/oradata/test/redo_test_2b.log') size 4M
--maxlogfiles 3
--maxlogmembers 7
maxloghistory 1000
archivelog
force logging
-- Tablespace Clauses
extent management local
datafile '/u01/app/oracle/oradata/test/system.dbf' size 300M autoextend ON
sysaux datafile '/u01/app/oracle/oradata/test/sysaux.dbf' size 100M autoextend on
default tablespace ts_test_data datafile '/u01/app/oracle/oradata/test/data.dbf' size 1M autoextend on next 128K maxsize 2M extent management local
default temporary tablespace ts_test_temp tempfile '/u01/app/oracle/oradata/test/temp.dbf' size 2M extent management local
undo tablespace ts_undo datafile '/u01/app/oracle/oradata/test/undo.dbf' size 20M
--
set time_zone = '-01:00';


6) Run catalog.sql and catproc.sql

@?/rdbms/admin/catalog.sql

@?/rdbms/admin/catproc.sql
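
Both scripts are run from SQL*Plus while connected as SYS (catalog.sql builds the data dictionary views, catproc.sql the PL/SQL packages). It is also common to run pupbld.sql as SYSTEM afterwards so SQL*Plus user profiles work; a sketch, reusing the SYSTEM password from the CREATE DATABASE statement above:

sqlplus / as sysdba
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
connect system/gd123
@?/sqlplus/admin/pupbld.sql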

Monday 8 June 2009

MySQL Cluster - Windows/Linux Platform (Mixed setup)

In this article I will install data nodes on mixed platforms, i.e. one data node on Linux (CentOS 5) and a second data node on Windows XP 2003.

For this setup I have used three servers:

1. 192.168.35.22 ( Solaris 10, for running Management Node)
2. 192.168.35.78 ( Data Node, Centos 5)
3. 192.168.35.74 ( Data Node, Windows XP 2003)


Steps to install a Data Node on Windows XP (192.168.35.74)

1) Get the latest MySQL Cluster binaries for Windows:
http://dev.mysql.com/downloads/cluster/7.0.html#win32
2) Run the Windows MSI installer and follow the steps, but don't configure/start the mysql instance.
3) The MSI installer will by default install the mysql binaries onto the following path, i.e. C:\Program Files\MySQL\MySQL Server 7.0

4) Manually create the following path (directory structure on the C: drive):
/var/lib/mysql-cluster

5) Edit my.ini (C:\Program Files\MySQL\MySQL Server 7.0\my.ini) and paste the following contents:

[mysqld]
ndbcluster
ndb-connectstring=192.168.35.22
[mysql_cluster]
ndb-connectstring=192.168.35.22

Steps to set up a Data Node on CentOS 5 (192.168.35.78)

1) Download the RPMs for CentOS 5 (I have used the Red Hat Enterprise Linux 5 RPMs)
http://dev.mysql.com/downloads/cluster/7.0.html#Red_Hat_Enterprise_Linux_5_RPM_(x86)

2) You should download the following RPMs:

  • MySQL-Cluster-gpl-client-7.0.5-0.rhel5.i386.rpm
  • MySQL-Cluster-gpl-devel-7.0.5-0.rhel5.i386.rpm
  • MySQL-Cluster-gpl-management-7.0.5-0.rhel5.i386.rpm
  • MySQL-Cluster-gpl-server-7.0.5-0.rhel5.i386.rpm
  • MySQL-Cluster-gpl-shared-7.0.5-0.rhel5.i386.rpm
  • MySQL-Cluster-gpl-storage-7.0.5-0.rhel5.i386.rpm
  • MySQL-Cluster-gpl-tools-7.0.5-0.rhel5.i386.rpm

3) RPM installation
rpm -Uvh MySQL-Cluster-gpl-client-7.0.5-0.rhel5.i386.rpm MySQL-Cluster-gpl-devel-7.0.5-0.rhel5.i386.rpm MySQL-Cluster-gpl-management-7.0.5-0.rhel5.i386.rpm MySQL-Cluster-gpl-server-7.0.5-0.rhel5.i386.rpm MySQL-Cluster-gpl-shared-7.0.5-0.rhel5.i386.rpm MySQL-Cluster-gpl-storage-7.0.5-0.rhel5.i386.rpm MySQL-Cluster-gpl-tools-7.0.5-0.rhel5.i386.rpm

4) Open the my.cnf file using your favorite editor (e.g. vi /etc/my.cnf) and paste the following lines:
[mysqld]
ndbcluster
ndb-connectstring=192.168.35.22
[mysql_cluster]
ndb-connectstring=192.168.35.22

Steps to install the Management Node

1) Download binaries for Solaris
http://dev.mysql.com/downloads/cluster/7.0.html#Solaris

2) Uncompress the file, i.e. gunzip mysql-cluster-gpl-7.0.5-solaris10-i386.pkg.gz

3) Install the package, i.e. pkgadd -d mysql-cluster-gpl-7.0.5-solaris10-i386.pkg

4) Create the following directory structure:
/var/lib/mysql-cluster

5) Set up the configure.ini file for the management server and paste the following contents into it,
e.g.
vi /etc/configure.ini
[NDBD DEFAULT]
NoOfReplicas=2

[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management Server
[NDB_MGMD]
HostName=192.168.35.22 # the IP of the Management Server
# Storage Engines
[NDBD]
HostName=192.168.35.78 # the IP of the CentOS 5 host running a Data Node
DataDir=/var/lib/mysql-cluster
[NDBD]
HostName=192.168.35.74 # the IP of the Windows XP host running a Data Node
DataDir=/var/lib/mysql-cluster
# 2 MySQL Clients
# I personally leave this blank to allow rapid changes of the mysql clients;
# you can enter the hostnames of the above two servers here. I suggest you don't.
[MYSQLD]
[MYSQLD]



Steps to start mysql cluster (192.168.35.22)


Start Management Node

1. First we start the management server:
ndb_mgmd -f /etc/configure.ini

Start Data Nodes

2. Go to the CentOS 5 host (192.168.35.78) and start the data node:
ndbd
3. Go to the Windows XP host (192.168.35.74) and start the data node:
Locate the ndbd.exe file (C:\Program Files\MySQL\MySQL Server 7.0\bin) and double-click ndbd.exe.

Check it's working


You can now return to the management server (192.168.35.22) and enter the management console:

/opt/mysql/mysql-cluster/bin/ndb_mgm

Enter the command SHOW to see what is going on. A sample output looks like this:

-bash-3.00# ./ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.35.78 (mysql-5.1.32 ndb-7.0.5, Nodegroup: 0, Master)
id=3 @192.168.35.74 (mysql-5.1.32 ndb-7.0.5, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.35.22 (mysql-5.1.32 ndb-7.0.5)

[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from any host)
id=5 (not connected, accepting connect from any host)

If you see

not connected, accepting connect from 192.168.35.[22/78/74]

in the first or last two lines, then you have a problem. Please email me with as much detail as you can give and I can try to find out where you have gone wrong and change this HOWTO to fix it.
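
For completeness: the two [MYSQLD] slots above stay "not connected" until a mysqld (SQL node) pointing at the management server is started. A minimal sketch of bringing one up on a data node host (the my.cnf/my.ini above already carries the ndb-connectstring) and verifying that tables are really clustered (clustertest is just an example name):

mysqld_safe &
mysql -e "CREATE DATABASE clustertest"
mysql -e "CREATE TABLE clustertest.t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER"
mysql -e "INSERT INTO clustertest.t1 VALUES (1)"
# the row should now be visible from any other SQL node attached to the cluster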

Tuesday 7 April 2009

CentOS 5.3 vs Sun Solaris 10 - InnoDB performance

This article compares the performance characteristics of MySQL 5.1.33 running on Solaris 10 versus CentOS 5.3.
The test results show that InnoDB performance on CentOS 5.3 is much better than on Sun Solaris 10.

Lab Setup-

The list of hardware used is as follows:

vendor_id  : AuthenticAMD
cpu family : 15
model      : 107
model name : AMD Athlon(tm) 64 X2 Dual Core Processor 4000+
stepping   : 1
cpu MHz    : 2100.000
cache size : 512 KB
RAM        : 1.5G


The list of software used is as follows:

* CentOS 5.3
* Solaris 5.10
* MySQL 5.1.33
* vmstat

The same my.cnf was used on both CentOS and Solaris.
######################################
[mysqld]
# InnoDB tables
innodb_file_per_table
innodb_buffer_pool_size = 512M
innodb_additional_mem_pool_size = 20M

innodb_log_file_size =128M
######################################

The server was first loaded with the CentOS operating system and MySQL 5.1.33 was installed (i.e. RPM installation). A total of 23 million rows was inserted into InnoDB tables using the mysqlimport command.
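
For reference, this is the shape of the mysqlimport invocation used for such a bulk load; a sketch (database name, file path and delimiter are example values; mysqlimport derives the table name, t1, from the file name):

mysqlimport --local --fields-terminated-by=',' test /data/load/t1.txt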

vmstat stats -
vmstat shows that the OS was writing at more than 14MB/s, but no I/O wait was reported; the CPU was at 49% running non-kernel code.

Test results on Sun Solaris 10 to follow...

Wednesday 1 April 2009

FEDERATED Storage Engine

I have observed that DML statements run faster on FEDERATED tables than SELECT statements do.
FEDERATED isn't very clever when it comes to SELECT; it does not, for example, honor LIMIT clauses on the remote server, but instead does the LIMIT processing locally. There are other limitations in the protocol FEDERATED uses that make it underperform for some SELECT queries.
This is not the case with DML operations, as they are handled 100% on the remote server.
Strangely, not all types of SELECT queries take long, except where I use the COUNT() function.
FEDERATED actually moves all the data to the client and then performs the COUNT() locally in this case, which explains the lack of speed.
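
For context, a FEDERATED table is declared locally with a CONNECTION string pointing at the remote server; a minimal sketch (host, credentials, and the remote test/t1 table are example values, and the column list must match the remote table):

mysql> CREATE TABLE remote_t1 (
    ->   id INT PRIMARY KEY,
    ->   name VARCHAR(64)
    -> ) ENGINE=FEDERATED
    -> CONNECTION='mysql://repl:slavepass@192.168.16.41:3306/test/t1';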

Mysqldumpslow - How To

Note: in case you haven't enabled the slow log and are using 5.0.x, then add the following lines into my.cnf under [mysqld] and restart the MySQL server.
####### 5.0.x
log-slow-queries
long_query_time=5 # capture slowest slow queries first
######

# If you have installed 5.1.x, then follow these steps instead:
mysql> set global slow_query_log=1;
Query OK, 0 rows affected (0.02 sec)
mysql> set global long_query_time=5;
Query OK, 0 rows affected (0.00 sec)

You need to have Perl installed on your system in order to run mysqldumpslow. If you have Perl installed in a location other than /usr/local/bin/perl, you can either edit the mysqldumpslow script to point to the right location of the perl binary, or create a symbolic link /usr/local/bin/perl that points to the real location of the perl binary.
e.g.
mysqldumpslow -s c -t 10 /path-to-slowlog/slowlog.log
(-s c sorts the entries by query count; -t 10 prints only the top 10.)

If the log file is big, mysqldumpslow might put some load on your CPU, so I would suggest that you copy the log file to some other box and run mysqldumpslow there, just to avoid CPU load on your server.

In order to parse only the last 200 lines of the file, you will have to do something like this:

tail --lines=200 /data/data/eus3300005-slow.log | mysqldumpslow -t20 -

(tail --lines=200 will get only the last 200 lines, and the final "-" parameter tells mysqldumpslow to read the log from stdin)

For Solaris 10, please use:

tail -200 /data/data/eus3300005-slow.log | mysqldumpslow -t20 -

Transaction isolation levels

The four levels, in ascending order of strictness, are:

* READ UNCOMMITTED:
* READ COMMITTED:
* REPEATABLE READ:
* SERIALIZABLE:

There is a big difference between READ COMMITTED and REPEATABLE READ. Basically, READ COMMITTED means that within one transaction you will be able to see *committed* changes made by another transaction. So within the same transaction you can get different values for the same query. For example, we have transaction T1 which reads a row:

BEGIN
SELECT row FROM t WHERE id=1; ---> this returns row="my row"

Now transaction T2 does an update and commits it:

BEGIN
UPDATE t SET row="updated row" WHERE id=1;
COMMIT

Back to T1, if we issue the query again we would get:

SELECT row FROM t WHERE id=1; ---> this returns row="updated row"

So we have different values of "row" within the same transaction.


The case is different in a repeatable read. As the name suggests, every read you do will be "repeated". So in the T1 transaction above, the SELECT statement will always return row="my row" even if other transactions (T2) change the value of row.
With REPEATABLE READ, your T1 transaction is guaranteed to read the same value every time it does an identical SELECT (even if another T2 transaction changes those SELECTED values in the meantime).
This will not happen in READ COMMITTED. Within the T1 transaction it is possible that the same SELECT will give different results (because T2 changed them in the meantime), i.e., the isolation level is lower.
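
You can try this yourself by choosing the isolation level per session before starting the transaction (standard MySQL syntax, using the same table t as above):

mysql> SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
mysql> BEGIN;
mysql> SELECT row FROM t WHERE id=1;  -- rerun this after T2 commits: the value changes
mysql> COMMIT;

mysql> SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
mysql> BEGIN;
mysql> SELECT row FROM t WHERE id=1;  -- rerun this after T2 commits: same value every time
mysql> COMMIT;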

I'll discuss the remaining two in my next post...

Concurrency, transactions and isolation levels

Concurrency, transactions and isolation levels are a complicated matter so I understand it's initially difficult to grasp the concepts.

Choosing an isolation level is basically a trade-off between the amount of concurrency and obtaining "the correct and expected" data. Higher isolation levels (like SERIALIZABLE) lower the amount of concurrency, because one transaction will depend on the outcome of another and therefore cannot run at the same time (more locking is needed). However, this ensures that results are "more correct", as other transactions can't modify them.

It's difficult to explain these concepts, so let me try with an example. Say you are at an ATM and want to withdraw some money. You ask the machine how much money you have and it says $500. So you say, OK, continue, and withdraw $500. The ATM will do:

withdraw($500) if balance >= $500

Just before you ask the ATM to withdraw (but after you've seen your balance), your telephone company tries to withdraw $100 to pay for your phone bill.

With READ COMMITTED, the ATM will read your balance, which is now $500-$100=$400, and refuse to give you the $500 (it read the committed value and now knows you only have $400), even though it just told you you had $500.

With REPEATABLE READ, the ATM will read your balance, but it is guaranteed to read the same value, i.e., $500, and will give you the $500. You may ask: what happens to the $100 withdrawal from the phone company? It depends on the implementation, but usually the phone bill transaction will try to do the update, find a lock, and wait until the ATM has finished. That is why higher isolation levels have lower concurrency but more "accurate" data, as they make transactions more serializable.

Tuesday 31 March 2009

Obtain reads/writes information for the non-default key caches

While there isn't a simple SQL command to pass to the mysql client to obtain this information, you can use the following command to write the information to the MySQL error log:

shell> mysqladmin debug

Every time you execute this command, a new entry will be added to the log file that includes information on the individual key caches. Here is an example of how to create an independent cache, and the output of the debugging entry:

mysql> SET GLOBAL tcache.key_buffer_size=2048*1024;
Query OK, 0 rows affected (0.00 sec)

mysql> CACHE INDEX t IN tcache;
+--------+--------------------+----------+----------+
| Table | Op | Msg_type | Msg_text |
+--------+--------------------+----------+----------+
| test.t | assign_to_keycache | status | OK |
+--------+--------------------+----------+----------+
1 row in set (0.01 sec)


======= error log =============
Key caches:
default
Buffer_size: 8384512
Block_size: 1024
Division_limit: 100
Age_limit: 300
blocks used: 3
not flushed: 0
w_requests: 5
writes: 1
r_requests: 15
reads: 3

tcache
Buffer_size: 2097152
Block_size: 1024
Division_limit: 100
Age_limit: 300
blocks used: 0
not flushed: 0
w_requests: 0
writes: 0
r_requests: 0
reads: 0

===============================

Now if we insert some entries into the table, read them, and execute "mysqladmin debug" again, we can see the updated stats:


mysql> INSERT INTO t (id) VALUES (0),(1),(2),(3),(4);
Query OK, 5 rows affected (0.01 sec)
Records: 5 Duplicates: 0 Warnings: 0

======= error log =============
Key caches:
default
Buffer_size: 8384512
Block_size: 1024
Division_limit: 100
Age_limit: 300
blocks used: 3
not flushed: 0
w_requests: 5
writes: 1
r_requests: 15
reads: 3

tcache
Buffer_size: 2097152
Block_size: 1024
Division_limit: 100
Age_limit: 300
blocks used: 1
not flushed: 0
w_requests: 5
writes: 1
r_requests: 4
reads: 0
===============================
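
Since table t is now assigned to tcache, you can also warm the cache explicitly instead of waiting for queries to fault blocks in; LOAD INDEX INTO CACHE preloads a MyISAM table's index blocks into its assigned key cache (the r_requests/reads counters above will reflect the preload):

mysql> LOAD INDEX INTO CACHE t;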

ZFS Replication: HA backup

A small introduction of ZFS - ZFS is a transactional file system, which means that the file system state is always consistent on disk. With a transactional file system, data is managed using copy on write semantics. Data is never overwritten, and any sequence of operations is either entirely committed or entirely ignored. In addition, synchronous data (written using the O_DSYNC flag) is always guaranteed to be written before returning, so it is never lost.
http://docs.sun.com/app/docs/doc/819-5461/zfsover-2?a=view

Steps taken to build the ZFS replication:

1. Create a ZFS pool to store the data directory in. The command here will vary; there are several methods to create a pool. You can use a file, partition, slice, or whole disk. The only resource I had available was a file. I created the file first:


shell>mkfile 1G /export/home/data


2. Then create a ZFS pool from the file:



shell>zpool create datadir /export/home/data


Now there is a folder in the root named 'datadir'. You can create a pool from a partition, slice, or whole drive in a similar manner by indicating the device name as it is listed in /dev/dsk, for example:
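
A sketch of the whole-disk variant (the device name here is just an example; use a disk that holds no data you care about):

shell>zpool create datadir c1t0d0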


3. Once the 'datadir' pool is created, we can copy our data directory to the datadir pool. Shut down the MySQL server and check the error log to ensure a clean shutdown.

4. Add a datadir entry to your my.cnf file that points to the new folder:



in my.cnf

[mysqld]

datadir = /datadir

Now we have our data directory inside a ZFS pool. Next we need to send it to a pool on another machine, typically for HA backup. Use steps 1-3 to create another pool on another machine. For example, we will have a 'SlaveA' machine with a pool named 'slavepool'.


5. We will need to snapshot the data directory in ZFS to be able to send it to SlaveA.



shell >zfs snapshot datadir@snap1



This creates snapshot 'snap1' of the datadir pool.



You can verify the snapshots with the list command: 'zfs list -t snapshot'

6. Next we will need to send the snapshot to the SlaveA server and apply it to the 'slavepool' pool:



shell >zfs send datadir@snap1 |ssh user_name@SlaveA pfexec zfs recv -F slavepool



This will 'send' the datadir snapshot 'datadir@snap1' over ssh to the SlaveA server, which then uses the stream of data to 'recv' (receive) it into the 'slavepool' pool.



7. At this point we now have a snapshot of the data directory in the slavepool on SlaveA. So that the pool only ever changes through the snapshots we apply, we need to make slavepool read-only, so no data or metadata can be changed by the OS.



On SlaveA server:



shell >zfs set readonly=on slavepool


8. Start the MySQL server back up and verify that it is using the new data directory by checking the global variables:



mysql>SHOW GLOBAL VARIABLES LIKE '%datadir%';



+---------------+-----------+
| Variable_name | Value     |
+---------------+-----------+
| datadir       | /datadir/ |
+---------------+-----------+

At this point you will need to create a shell script that will snapshot the datadir and send the incremental snapshots to the slave.



Our original snapshot is datadir@snap1. Now we take another snapshot and call it datadir@snap2. We can then apply the changes in the file system to the SlaveA by sending the incremental changes to its slavepool:



Take the snapshot:



shell >zfs snapshot datadir@snap2



Then send the incremental (-i) change to the slave:



shell >zfs send -i datadir@snap1 datadir@snap2 |ssh user_name@SlaveA pfexec zfs recv slavepool


The above two steps will need to be handled in a scheduled script to update the SlaveA server so that its snapshot stays current. You will need to test the frequency to determine a good balance between load and staying current with your snapshots; see the sketch at the end of this post.

If you need to switch over to the slave server, remember you will need to disable the read-only setting we enabled earlier:



$>zfs set readonly=off slavepool


Then you can start your SlaveA MySQL server. It should go through crash recovery on the InnoDB tablespace. You may need to perform a REPAIR TABLE on MyISAM tables. It is best to use an engine with automatic crash recovery, like InnoDB, for this configuration.
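
As mentioned above, the snapshot-and-send cycle belongs in a scheduled script. A minimal sketch, assuming passwordless ssh to SlaveA and sequentially numbered snapshot names (the state file path is an example; seed it once with echo snap1 > /var/run/zfs_last_snap):

#!/bin/bash
# Send the next incremental snapshot of 'datadir' to SlaveA's 'slavepool'.
STATE=/var/run/zfs_last_snap
PREV=`cat $STATE`                  # e.g. "snap1", written by the previous run
NEXT="snap`expr ${PREV#snap} + 1`" # e.g. "snap2"
zfs snapshot datadir@$NEXT
zfs send -i datadir@$PREV datadir@$NEXT | ssh user_name@SlaveA pfexec zfs recv slavepool
echo $NEXT > $STATE                # remember the new baseline for the next run
# optionally: zfs destroy datadir@$PREV once it is no longer needed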

Monday 30 March 2009

Solaris- InnoDB Plugin From Source

Since there isn't any binary InnoDB plugin available for Solaris 10, I decided to share my experience of building the InnoDB plugin from source on Solaris 10.


Assumption

1. Readers are assumed to have at least some basic Unix/Linux administrative skills.
2. Readers are assumed to have running copy of Solaris 10.
3. Readers are assumed to have backed up any critical data before the installation.

1. Get the latest InnoDB plugin
http://www.innodb.com/innodb_plugin/download/

2. Get the latest MySQL 5.1 sources from
http://dev.mysql.com/downloads/mysql/5.1.html#source

3. Replace the contents of the mysql-5.1.N/storage/innobase/ directory
with the contents of the unpacked InnoDB plugin directory.
shell> cd mysql-5.1.N/storage
shell> mv innobase innobase-default
shell> tar xzf /path/to/innodb_plugin-1.0.N.tar.gz
shell> mv innodb_plugin-1.0.N innobase

4. Compile MySQL using the configure line shown in step 8.
Note: * This uses Sun Studio Express (not gcc).
* --prefix=/opt/mysql/5.1.N is used, but you can change the directories if you like; the same goes for --localstatedir and --libexecdir.
* It may be good to change --with-comment and --with-server-suffix to say that this is your build using the InnoDB Plugin.

5. Add the flags

CC=/path/to/SunStudioExpress/bin/cc

CXX=/path/to/SunStudioExpress/bin/CC

CFLAGS="-g -xO2 -mt -fsimple=1 -ftrap=%none -nofstore -xbuiltin=%all -xlibmil -xlibmopt -xtarget=generic"

CXXFLAGS="-g -xO2 -mt -fsimple=1 -ftrap=%none -nofstore -xbuiltin=%all -features=no%except -xlibmil -xlibmopt -xtarget=generic"

6. export CC CXX CFLAGS CXXFLAGS

7. Optionally
make clean

8.

./configure --prefix=$PREFIX --localstatedir=$PREFIX/data \
  --libexecdir=$PREFIX/bin \
  --with-comment="MySQL Server with InnoDB Plugin" \
  --with-server-suffix="-innodb-plugin" \
  --enable-thread-safe-client --enable-local-infile \
  --with-mysqld-libs=-lmtmalloc \
  --with-pic --with-client-ldflags="-static" \
  --with-mysqld-ldflags="-static" --with-zlib-dir=bundled \
  --with-big-tables --with-ssl \
  --with-plugins=partition,archive,blackhole,csv,federated,heap,innobase,myisam \
  --with-extra-charsets=complex

9. Make

10. Make install.
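
Once installed, you can start the server from the new prefix and confirm the build; a quick check (assuming the data directory has been initialized and $PREFIX matches the configure line above):

shell> $PREFIX/bin/mysqld_safe &
shell> $PREFIX/bin/mysql -e "SELECT VERSION(); SHOW ENGINES;" | grep -i -e innodb -e plugin
# VERSION() should carry the "-innodb-plugin" suffix set via --with-server-suffix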
On MySQL DBA blog, I put my thoughts and experiences as a Senior DBA. My responsibility is managing one of the largest and most active MySQL on Linux installations. Topics include MySQL scalability, Myisam/InnoDB/MySQL performance tuning, disaster recovery, database security, replication, and general information about database management. E-mail me at akhangd at hotmail.co dot uk for my resume.