Thursday, 25 November 2010

Query Tuning (Query optimization) Part-1

In this article I'll try to explain query tuning techniques. To start, we need to capture slow queries, that is, log queries that run longer than the long_query_time server variable (in seconds; microsecond resolution is supported when logging to a file). The slow query log helps identify candidates for query optimization.

Assumptions:

1) The reader has basic MySQL/Unix skills.


1) Enable slow query log:
Add following lines into MySQL option file (i.e. /etc/my.cnf) under [mysqld] section

log-slow-queries=mysql_slow.log
long-query-time=2

Note: If you specify no name for the slow query log file, the default name is host_name-slow.log.

And restart the MySQL server when it's safe to do so, e.g. /etc/init.d/mysql restart
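Once the server is back up, you can confirm the settings took effect:

mysql> SHOW GLOBAL VARIABLES LIKE 'long_query_time';
mysql> SHOW GLOBAL VARIABLES LIKE 'log_slow_queries';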

For further details on slow query log, please visit here:
# if using 5.1
http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
# if using 5.0
http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html

If you are using MySQL 5.1, you can enable the slow log at runtime this way:

mysql> set global long_query_time = 2;
mysql> set global slow_query_log = 1;
mysql> set global slow_query_log_file = 'mysql_slow.log';

2) Process Slow Query Log:
You can process the whole slow log file and find the most frequent slow queries using the mysqldumpslow utility.

# Find top 10 slowest queries

$ mysqldumpslow -s t -n 10 /path/to/mysql_slow.log > mysql_slow.log.clog

Reading mysql slow query log from /path/to/mysql_slow.log

Count: 1 Time=1148.99s (1148s) Lock=0.00s (0s) Rows=0.0 (0),
insert into a select * from b
Count: 37 Time=2.28s (84s) Lock=0.11s (4s) Rows=0.0 (0),
Update a set CONTENT_BINARY = 'S' where ID = 3874
Count: 1 Time=29.31s (29s) Lock=0.00s (0s) Rows=0.0 (0),
select max(LOCK_VERSION) from b
...

For further details on using mysqldumpslow please visit here:
http://mysqlopt.blogspot.com/search?q=mysqldumpslow
http://dev.mysql.com/doc/refman/5.1/en/mysqldumpslow.html
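Sorting by time surfaces the slowest queries; sorting by count (-s c) surfaces the most frequent ones, which is often just as useful:

# Find top 10 most frequent slow queries

$ mysqldumpslow -s c -n 10 /path/to/mysql_slow.log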

3) Using EXPLAIN to analyze slow queries:
EXPLAIN helps us determine:
a) Whether the optimizer is using existing indexes
b) Whether a proposed query rewrite is an improvement
c) Where new indexes are needed

4) Main query performance issues:
a) Full table scans
b) Temporary tables
c) Filesort
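The last two show up in EXPLAIN's Extra column as 'Using temporary' and 'Using filesort'. A minimal sketch using the tblmetricshort table from the next example, assuming no index covers the grouping column (output abbreviated):

mysql> EXPLAIN SELECT fkMetric_ID, COUNT(*) FROM tblmetricshort GROUP BY fkMetric_ID\G
...
Extra: Using temporary; Using filesort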

5) Let's start with the basics, that is, 'full table scan' situations:

mysql> EXPLAIN select * from tblmetricshort where timestamp between '2010-08-17 00:00:00' and '2010-08-17 01:20:00'\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: tblmetricshort
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 400336
Extra: Using where
1 row in set (0.00 sec)

The EXPLAIN output shows that the optimizer will do a full table scan, as indicated by type: ALL.

The solution is to add an index on the `timestamp` column, since we are filtering rows on it.

mysql> alter table tblmetricshort add index (`timestamp`);
Query OK, 0 rows affected (1.84 sec)
Records: 0 Duplicates: 0 Warnings: 0

Let's re-run EXPLAIN on the same query

mysql> EXPLAIN select * from tblmetricshort where timestamp between '2010-08-17 00:00:00' and '2010-08-17 01:20:00'\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: tblmetricshort
type: range
possible_keys: timestamp
key: timestamp
key_len: 4
ref: NULL
rows: 1
Extra: Using where
1 row in set (0.00 sec)

Now let's remove this index and look at another example:

mysql> alter table tblmetricshort drop index `timestamp`;
Query OK, 0 rows affected (0.03 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> EXPLAIN SELECT * FROM tblmetricshort
INNER JOIN tblMetric
ON (tblmetricshort.fkMetric_ID=tblMetric.pkMetric_ID)
WHERE timestamp between '2010-08-17 00:00:00' and '2010-08-17 01:20:00'\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: tblmetricshort
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 400336
Extra: Using where
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: tblMetric
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 3
ref: metrics.tblmetricshort.fkMetric_ID
rows: 1
Extra: Using where
2 rows in set (0.00 sec)

In the example above, the optimizer performs a full table scan on tblmetricshort first. When MySQL then goes looking for rows in tblMetric, instead of table scanning as before, it uses the value of fkMetric_ID against the PRIMARY KEY of tblMetric to fetch the matching rows directly. This SQL is therefore partially optimized: it still scans all rows of tblmetricshort, but it uses an index to join the tables.

The solution is again to add an index on the `timestamp` column and re-run EXPLAIN with the same query.
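As before:

mysql> alter table tblmetricshort add index (`timestamp`);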

mysql> EXPLAIN SELECT * FROM tblmetricshort INNER JOIN tblMetric ON (tblmetricshort.fkMetric_ID=tblMetric.pkMetric_ID) WHERE timestamp between '2010-08-17 00:00:00' and '2010-08-17 01:20:00'\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: tblmetricshort
type: range
possible_keys: timestamp
key: timestamp
key_len: 4
ref: NULL
rows: 1
Extra: Using where
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: tblMetric
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 3
ref: metrics.tblmetricshort.fkMetric_ID
rows: 1
Extra: Using where
2 rows in set (0.00 sec)


Continue....


Tuesday, 9 November 2010

Short index length (good or bad)

If the workload on your DB server is disk bound and you do not have enough memory to increase the InnoDB buffer pool size, you can help performance by reducing the length of indexes: more index entries fit into memory, which speeds up write operations. It may also affect SELECT queries, but not necessarily for the worse. Allow me to show an example. I will lower the index length on my test table from 40 characters to just 8:
Note: You may want to try different lengths for your index and check which fits your workload best (both write and read speed).
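One way to choose a candidate length is to compare the number of distinct prefixes with the number of distinct full values; the closer the two counts, the less selectivity you lose by shortening the index (the prefix length 8 here matches the index created below):

mysql> SELECT COUNT(DISTINCT LEFT(value, 8)), COUNT(DISTINCT value) FROM csc52021;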

ALTER TABLE csc52021 DROP INDEX value_40, ADD INDEX value_8(value(8));
To find all rows with a value between 'aaa' and 'b':

mysql> EXPLAIN SELECT COUNT(*) FROM csc52021 WHERE value>'aaa%' AND value<'b';
+----+-------------+----------+-------+---------------+---------+---------+------+------+-------------+
| id | select_type | table    | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
+----+-------------+----------+-------+---------------+---------+---------+------+------+-------------+
| 1  | SIMPLE      | csc52021 | range | value_8       | value_8 | 11      | NULL | 9996 | Using where |
+----+-------------+----------+-------+---------------+---------+---------+------+------+-------------+
1 row in set (0.00 sec)
mysql> SELECT COUNT(*) FROM csc52021 WHERE value>'aaa%' AND value<'b';
+----------+
| COUNT(*) |
+----------+
| 5533     |
+----------+
1 row in set (0.02 sec)
As you can see, the index is used.
What if we look for a specific value?
mysql> EXPLAIN SELECT COUNT(*) FROM csc52021 WHERE value='a816d6ce93c2aa992829f7d0d9357db896d5e7de';
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table    | type | possible_keys | key     | key_len | ref   | rows | Extra       |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
| 1  | SIMPLE      | csc52021 | ref  | value_8       | value_8 | 11      | const | 1    | Using where |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
1 row in set (0.00 sec)

mysql> SELECT COUNT(*) FROM csc52021 WHERE value='a816d6ce93c2aa992829f7d0d9357db896d5e7de';
+----------+
| COUNT(*) |
+----------+
| 1        |
+----------+
1 row in set (0.00 sec)

If a SELECT statement looks up a value with very low cardinality (for example, if millions of rows started with the same first 16 characters, "a816d6ce93c2aa99"), then a short index is not very efficient.
On the other hand, a short index length allows more index values to fit in memory, increasing lookup speed.

In my test table, I can decrease the index length to 4 and still have good performance. For example:

mysql> ALTER TABLE csc52021 DROP INDEX value_8, ADD INDEX value_4(value(4));

mysql> EXPLAIN SELECT COUNT(*) FROM csc52021 WHERE value='a816d6ce93c2aa992829f7d0d9357db896d5e7de';
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table    | type | possible_keys | key     | key_len | ref   | rows | Extra       |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
| 1  | SIMPLE      | csc52021 | ref  | value_4       | value_4 | 7       | const | 2    | Using where |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
1 row in set (0.00 sec)

mysql> EXPLAIN SELECT COUNT(*) FROM csc52021 WHERE value LIKE 'a816%';
+----+-------------+----------+-------+---------------+---------+---------+------+------+-------------+
| id | select_type | table    | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
+----+-------------+----------+-------+---------------+---------+---------+------+------+-------------+
| 1  | SIMPLE      | csc52021 | range | value_4       | value_4 | 7       | NULL | 2    | Using where |
+----+-------------+----------+-------+---------------+---------+---------+------+------+-------------+
1 row in set (0.00 sec)

mysql> SELECT COUNT(*) FROM csc52021 WHERE value LIKE 'a816%';
+----------+
| COUNT(*) |
+----------+
| 2        |
+----------+
1 row in set (0.00 sec)

To summarize:
If you have good cardinality, a shorter index length will speed up both write and read operations, especially if you are I/O bound.

Thursday, 29 July 2010

Backup mysql binary logs



It is always good to make a backup of all the log files you are about to delete. Alternatively, if you take incremental backups, you should rotate the binary log using FLUSH LOGS. Once done, copy to the backup location all binary logs ranging from the one current at the time of the last full or incremental backup up to the last but one. These binary logs are the incremental backup.
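For example, assuming the oldest binary log you still need is a hypothetical mysql-bin.000123, after copying the earlier files to the backup location you could run:

mysql> FLUSH LOGS;
mysql> PURGE BINARY LOGS TO 'mysql-bin.000123';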

Here is a bash script which you can use to back up binary logs; all you need to do is change the following parameters according to your needs. This script is not mine, I got the idea from here:


#!/bin/bash
#
# This script backs up binary log files
#

backup_user=dba
backup_password=xxxx
backup_port=3306
backup_host=localhost
log_file=/var/log/binlog_backup.log
binlog_dir=/mnt/database/logs          # Path to binlogs
backup_dir=/mnt/archive/binlogs/tench  # Path to backup directory

PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
export PATH

# Append a timestamped message to the log file
Log()
{
    echo "`date` : $*" >> $log_file
}

# Build the common client options; add host/port only for remote servers
mysql_options()
{
    common_opts="--user=$backup_user --password=$backup_password"
    if [ "$backup_host" != "localhost" ]; then
        common_opts="$common_opts --host=$backup_host --port=$backup_port"
    fi
}

# $1 = client program to run, $2 = statement to execute
mysql_command()
{
    $1 $common_opts --batch --skip-column-names -e "$2"
}

mysql_options

Log "[INIT] Starting MySQL binlog backup"

Log "Flushing MySQL binary logs (FLUSH LOGS)"

mysql_command mysql "flush logs"

master_binlog=`mysql_command mysql "show master status" 2>/dev/null | cut -f1`

Log "Current binary log is: $master_binlog"

copy_status=0

# Copy every binary log up to (but not including) the one currently in use
for b in `mysql_command mysql "show master logs" | cut -f1`
do
    if [ -z "$first_log" ]; then
        first_log=$b
    fi
    if [ "$b" != "$master_binlog" ]; then
        Log "Copying binary log ${b} to ${backup_dir}"
        rsync -av $backup_host:$binlog_dir/$b $backup_dir >& /dev/null
        if [ $? -ne 0 ]; then
            copy_status=1
            break
        fi
    else
        break
    fi
done

if [ $copy_status -eq 1 ]; then
    Log "[ERR] Failed to copy binary logs cleanly...aborting"
    exit 1
fi
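To run it nightly, a crontab entry along these lines will do (the script name and path are assumptions; adjust to where you saved it):

0 3 * * * /path/to/binlog_backup.sh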

Wednesday, 28 July 2010

NOW() function is not replication-safe

The MySQL documentation says that the NOW() function is replication-safe, and it even gives an example to show that you will obtain the same result on the slave as on the master.
http://dev.mysql.com/doc/refman/5.1/en/replication-features-functions.html

I'll try to show with an example that the NOW() function is not replication-safe when time zones differ. Suppose the Master is located in New York and the Slave is in London, and both servers use local time. On the Master you create a table and insert a row, having explicitly set the session time zone to 'SYSTEM'.

Note: Both Master/Slave are using identical version of MySQL, i.e. 5.1.47

mysql> CREATE TABLE test_now_func (mycol DATETIME);
Query OK, 0 rows affected (0.06 sec)

mysql> INSERT INTO test_now_func VALUES ( NOW() );
Query OK, 1 row affected (0.00 sec)

mysql> SET TIME_ZONE='SYSTEM';

mysql> SELECT * FROM test_now_func;
+---------------------+
| mycol               |
+---------------------+
| 2009-09-01 12:00:00 |
+---------------------+
1 row in set (0.00 sec)

However, if you do a SELECT on the slave's copy, you will see a different result:
mysql> SELECT * FROM test_now_func;
+---------------------+
| mycol               |
+---------------------+
| 2009-09-01 17:00:00 |
+---------------------+
1 row in set (0.00 sec)

The correct solution is recorded elsewhere in the manual: the same system time zone should be set for both master and slave.
http://dev.mysql.com/doc/refman/5.1/en/replication-features-timezone.html

For example

[mysqld]
..
default-time-zone='America/New_York'
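You can then verify that master and slave agree; run this on both servers and compare:

mysql> SELECT @@global.time_zone, @@system_time_zone;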

Friday, 23 July 2010

Key cache hit ratio

When we talk about key cache performance we usually look at these two status variables.

1. Key_read_requests
The number of requests to read a key block from the cache.

2. Key_reads
The number of physical reads of a key block from disk.

There is a good reason to examine Key_reads: disks are very slow relative to RAM. However, Key_reads aren't always physical disk reads. If the requested block of data isn't in the operating system's cache, a Key_read is a disk read; if you are lucky and it is cached, it's just a system call. Having said that, it is always good to minimize Key_reads, which may cause random disk I/O.

The rule of thumb is to keep the Key_reads : Key_read_requests ratio at 1:100 or better, while Key_writes / Key_write_requests should always be less than 1.
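You can read both counters and do the arithmetic yourself:

mysql> SHOW GLOBAL STATUS LIKE 'Key_read%';

If, say, Key_reads were 55,000 against 10,000,000 Key_read_requests (made-up numbers for illustration), the ratio would be 0.0055, comfortably better than 1:100.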

Finally, I'd like to show you something useful you can do with Key_reads:

[xxxxxx ~]$ mysqladmin -uroot -p ext -ri10 | grep Key_reads
Enter password:
| Key_reads | 44605148 |
| Key_reads | 4 |
| Key_reads | 4 |
| Key_reads | 13 |
| Key_reads | 9 |
| Key_reads | 6 |
| Key_reads | 20 |
| Key_reads | 6 |
| Key_reads | 11 |

This server is doing approximately 7 Key_reads every ten seconds, but it is hard to say how many of them are random I/O and how many are just system calls (i.e. reads from the operating system's cache).

Wednesday, 14 July 2010

upgrading to innodb plugin

As of MySQL 5.1.38, the InnoDB Plugin is included in MySQL 5.1 releases, in addition to the built-in version of InnoDB that was included in previous releases. So all we need to do is enable it, that is, add the following options to the my.cnf file:

[mysqld]
..
ignore-builtin-innodb
plugin-load=innodb=ha_innodb_plugin.so

For further reading visit http://dev.mysql.com/doc/refman/5.1/en/news-5-1-38.html

Assuming you have successfully enabled the InnoDB Plugin, now what? Do you automatically get the benefit of all the new plugin features?
The answer is yes and no. The InnoDB Plugin supports the new file format 'Barracuda', which introduces two new on-disk structures: compressed tables, and long variable-length columns stored off-page. By default, the InnoDB Plugin does not create tables in a format that is incompatible with the built-in InnoDB in MySQL.
This means you won't get these two features automatically until you create new tables using the new file format, or convert existing tables to it, as sketched below.
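A minimal sketch of switching to Barracuda and creating a compressed table (the plugin makes both variables settable at runtime; the table name and block size are just examples):

mysql> SET GLOBAL innodb_file_format='Barracuda';
mysql> SET GLOBAL innodb_file_per_table=ON;
mysql> CREATE TABLE t_compressed (id INT PRIMARY KEY, payload BLOB) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;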

The InnoDB Plugin offers lots of other features too, and those are enabled by default, so you automatically benefit from them: fast index creation and many performance and scalability enhancements. For full details click here:
http://www.innodb.com/doc/innodb_plugin-1.0/innodb-performance.html

Friday, 2 July 2010

Repair by keycache vs Repair by sorting

It is commonly advised to disable keys on MyISAM/ARCHIVE tables before performing bulk insert operations; this should give a considerable speedup, especially when you have many indexes. You then enable keys to re-create the missing indexes, and this task can take much longer than the bulk insert itself, sometimes days.

One approach would be not to disable keys in the first place, but would this help at all? The answer is: it may or may not; personally I do not suggest it. If you find the ENABLE KEYS step is taking too long, start a new session with MySQL and check the state of the command. A SHOW PROCESSLIST will reveal, not surprisingly, the dreaded "Repair with keycache", which seems to multiply indexing time by a factor of 20 to 30. If you want "Repair by sorting" to be used instead, the theoretical maximum size of every single index must fit within myisam_max_sort_file_size.

Solution: increase myisam_max_sort_file_size

mysql> SET GLOBAL myisam_max_sort_file_size = 100000 * 1024 * 1024;

And restart the ENABLE KEYS operation.

Similarly, if your repair takes several hours, then either use the above solution or
consider the 'myisamchk' utility with the -n (or --sort-recover) option.
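For instance, with the table otherwise quiet (the path below is an assumption; adjust to your datadir):

shell# myisamchk --recover --sort-recover /var/lib/mysql/mydb/mytable.MYI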


Wednesday, 26 May 2010

Using SSL Connections with MySQL

It is very important to secure the connection between the MySQL server and its clients. This can be achieved by using SSL connections between the server and client programs.

Assumptions:

1. MySQL has been built with SSL support, and you are going to use OpenSSL.
2. Reader has basic Unix/MySQL skills
3. Client program is running on a Windows machine (IP: 192.168.0.5)
4. Server is running on a CentOS/Redhat machine (IP: 192.168.0.4)

Setting Up SSL Certificates for MySQL:

The following openssl commands will prompt you for several inputs. For testing, you can just press Enter at all prompts.

1. Create new folder

shell> mkdir -p /etc/mysql
shell> chown -R mysql:mysql /etc/mysql/
shell> cd /etc/mysql/

2. Create CA certificate

shell> openssl genrsa 2048 > ca-key.pem

shell> openssl req -new -x509 -nodes -days 1000 -key ca-key.pem > ca-cert.pem

3. Create server certificate

shell> openssl req -newkey rsa:2048 -days 1000 -nodes -keyout server-key.pem > server-req.pem

shell> openssl x509 -req -in server-req.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > server-cert.pem

4. Create client certificate

shell> openssl req -newkey rsa:2048 -days 1000 -nodes -keyout client-key.pem > client-req.pem

shell> openssl x509 -req -in client-req.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > client-cert.pem
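Before wiring the certificates into MySQL, it doesn't hurt to confirm that both verify against the CA:

shell> openssl verify -CAfile ca-cert.pem server-cert.pem client-cert.pem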

5. Add these lines into MySQL option file (i.e. /etc/my.cnf)

[mysqld]
..
ssl-ca=/etc/mysql/ca-cert.pem
ssl-cert=/etc/mysql/server-cert.pem
ssl-key=/etc/mysql/server-key.pem

6. Restart MySQL Instance

shell> /etc/init.d/mysql restart

7. Connect to MySQL server using mysql client program and check whether SSL support is enabled

shell> mysql -uroot -pxxxx



mysql> SHOW VARIABLES LIKE 'have_ssl'; 
+---------------+-------+ 
| Variable_name | Value | 
+---------------+-------+ 
| have_ssl      | YES   | 
+---------------+-------+

8. Create a user with REQUIRE SSL option


mysql> GRANT ALL ON test.* TO 'ssl_user'@'192.168.0.5' IDENTIFIED BY 'password' REQUIRE SSL;

9. Copy following files from /etc/mysql/ onto windows client on c:\ssl\

ca-cert.pem
client-cert.pem
client-key.pem

10. Test connectivity from windows O/S. From command prompt run this command


C:\>mysql -h192.168.0.4 -ussl_user -ppassword --ssl-ca=c:\ssl\ca-cert.pem --ssl-cert=c:\ssl\client-cert.pem --ssl-key=c:\ssl\client-key.pem

11. Once connected successfully, run this command to verify SSL connection


mysql> SHOW STATUS LIKE 'Ssl_cipher';
+---------------+--------------------+
| Variable_name | Value              |
+---------------+--------------------+
| Ssl_cipher    | DHE-RSA-AES256-SHA |
+---------------+--------------------+

Done! 

Tuesday, 18 May 2010

Replication consistency check


How do you verify that slaves are in sync with the master? An easy way to achieve this is to use mk-table-checksum. In this article I'll explain how to verify a slave database against the master database.

Assumption:
=========

Master database server: 192.168.168.1
Slave database server: 192.168.168.2

Downloading
=========
We need to download mk-table-checksum onto the slave database server, i.e. 192.168.168.2:

wget http://www.maatkit.org/get/mk-table-checksum
chmod +x mk-table-checksum

Once downloaded, it is ready to use, no installation is needed.

Run this command; it will print the connection information and exit:

./mk-table-checksum 192.168.168.1 192.168.168.2 --user=root --ask-pass --chunk-size 500000 --databases test --explain-hosts
Output:
Server 192.168.168.1:
h=192.168.168.1,u=root
Server 192.168.168.2:
h=192.168.168.2,u=root

mk-table-checksum assumes the first server is the master and the others are slaves.
--user: user for login.
--ask-pass: prompt for a password when connecting to the MySQL servers; in this example it will prompt for both database servers.
--chunk-size: approximate number of rows or size of data to checksum at a time; in this example it's 500,000.
--databases: checksum this comma-separated list of databases; in this example we checksum only the test database.

Now it's time to run the above command without --explain-hosts:

./mk-table-checksum 192.168.168.1 192.168.168.2 --user=root --ask-pass --chunk-size 500000 --databases test

Output:

DATABASE TABLE CHUNK HOST ENGINE COUNT CHECKSUM TIME WAIT STAT LAG
allsaints table_1 0 192.168.168.2 MyISAM 2 53fa3b06 0 0 NULL NULL
allsaints table_1 0 192.168.168.1 MyISAM 2 53fa3b06 0 0 NULL NULL


DATABASE
The database the table is in.

TABLE
The table name.

CHUNK
The chunk (see --chunk-size). Zero if you are not doing chunked checksums.

HOST
The server's hostname.

ENGINE
The table's storage engine.

COUNT
The table's row count, unless you specified to skip it.

CHECKSUM
The table's checksum, unless you specified to skip it or the table has no rows. Some types of checksums will be 0 if there are no rows; others will print NULL.

TIME
The time the actual checksum and/or counting took.

WAIT
How long the checksum blocked before beginning.

STAT
The return value of MASTER_POS_WAIT().

LAG
How far the slave lags the master, as reported by SHOW SLAVE STATUS.
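Tip: on a healthy setup most rows will simply match. To print only tables whose checksums differ, Maatkit ships a companion tool, mk-checksum-filter, that you can pipe into (download it the same way as mk-table-checksum):

./mk-table-checksum 192.168.168.1 192.168.168.2 --user=root --ask-pass --databases test | ./mk-checksum-filter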







Wednesday, 17 March 2010

INNODB Full Table Scan

A full table scan is relatively cheap on MyISAM tables, but it is discouraged on InnoDB tables; the reason is that the InnoDB buffer pool is not scan resistant. A full table scan on InnoDB can evict a large portion of the useful pages from the buffer pool. There are times, however, when it is desirable to fill the buffer pool with full table scans. On the other hand, if you use mysqldump to back up InnoDB tables, you will have experienced performance issues, because it performs full table scans and thus blows out the buffer pool cache.
Innobase offered a fix for this in InnoDB Plugin 1.0.4, via two variables:

1) innodb_old_blocks_time
2) innodb_old_blocks_pct

1) Set innodb_old_blocks_time just before performing table scans and dumps, to prevent eviction of a large portion of useful pages from the buffer pool. You can set this variable at runtime:

mysql> SET GLOBAL innodb_old_blocks_time=1000;
.. perform table scans
mysql> SET GLOBAL innodb_old_blocks_time=0;


2) Similarly, we can decide how much space should be available for old blocks; the default is 37% (i.e. approximately 3/8 of the pool). This behavior is controlled by the innodb_old_blocks_pct runtime variable. Alter its value carefully, or you might end up with lots of unwanted old pages in the buffer pool.

Tuesday, 16 March 2010

Customized Logging Using MySQL Proxy

If you are using MySQL 5.0, it is not possible to enable the general query log without restarting the mysql instance. However, there are a number of ways to capture what is being sent to the MySQL server. For instance:
a) DTrace - an option available if you are using Solaris.
b) MySQL Proxy - usable on all major platforms, including Microsoft Windows.

Today I'll focus on MySQL Proxy.

Assumptions:
=========
1) Reader has basic Unix skills
2) Reader will carry out these steps on Redhat5/Centos5 only.
3) Download location is /root/
4) MySQL Server is running on 127.0.0.1 (i.e. localhost)

Installation:
========

1. Download latest version of MySQL Proxy from here
wget http://dev.mysql.com/get/Downloads/MySQL-Proxy/mysql-proxy-0.8.0-linux-rhel5-x86-64bit.tar.gz/from/http://mirrors.dedipower.com/www.mysql.com/
2. Decompress it
tar -xvzf mysql-proxy-0.8.0-linux-rhel5-x86-64bit.tar.gz
3. Rename it
mv mysql-proxy-0.8.0-linux-rhel5-x86-64bit mysql-proxy

Setup configuration file
=======================
Create a text file using your favourite editor, e.g. vi:
shell# vi /etc/mysql-proxy.conf

Copy the following text into /etc/mysql-proxy.conf:

[mysql-proxy]
daemon=true
pid-file=/tmp/mysql-proxy.pid
log-file=/var/log/mysql-proxy.log
log-level=debug
proxy-backend-addresses=127.0.0.1:3306
proxy-lua-script=/root/mysql-proxy/include/custom_logging.lua

Permissions (make sure the mysql-proxy daemon is able to read it)
=====================================================

shell# chmod 650 /etc/mysql-proxy.conf

Setup Lua script
================
Copy the following text into /root/mysql-proxy/include/custom_logging.lua (the path configured above):

--------------------------------------------------------------------
local log_file = '/var/log/mysql_query.log'

local fh = io.open(log_file, "a+")

local tokenizer = require("proxy.tokenizer")

function read_query( packet )
    if string.byte(packet) == proxy.COM_QUERY then
        query = string.sub(packet, 2)
        proxy.queries:append(1, packet, {resultset_is_needed = true} )
        return proxy.PROXY_SEND_QUERY
    else
        query = ""
    end
end

function read_query_result (inj)
    local row_count = 0
    local res = assert(inj.resultset)

    local num_cols = string.byte(res.raw, 1)

    if num_cols > 0 and num_cols < 255 then
        for row in inj.resultset.rows do
            row_count = row_count + 1
        end
    end

    local error_status = ""

    if not res.query_status or res.query_status == proxy.MYSQLD_PACKET_ERR then
        error_status = "[ERROR]"
    end

    if (res.affected_rows) then
        row_count = res.affected_rows
    end

    --
    -- write the query, adding the number of retrieved rows
    --
    local tokens = tokenizer.tokenize(inj.query:sub(2))
    local norm_query = tokenizer.normalize(tokens)

    fh:write( string.format("%s %6d --%s --%s query-time(%d) rows{%d} %s\n",
        os.date('%Y-%m-%d %H:%M:%S'),
        proxy.connection.server.thread_id,
        proxy.connection.client.default_db,
        query,
        inj.query_time,
        row_count,
        error_status))
    fh:flush()
end
---------------------------------------------------

Starting MySQL Proxy Server
=====================
Shell# /root/mysql-proxy/bin/mysql-proxy --defaults-file=/etc/mysql-proxy.conf

Testing
=====
shell# mysql -h127.0.0.1 -uroot -pxxx -P4040

mysql> SHOW DATABASES;
mysql> SHOW TABLES FROM TEST;
mysql> exit;

See what appears in log file
===================
shell# tail -f /var/log/mysql_query.log
2010-03-16 16:14:38 58123 -- --SELECT DATABASE() query-time(128) rows{1}
2010-03-16 16:15:11 58123 --test --show tables query-time(252) rows{5}

Thursday, 25 February 2010

Backup using mysqlhotcopy script

I needed to set up a script to back up MyISAM tables, so I thought I'd publish it on the web too.

#!/bin/bash

########################
# Create Backup Folder #
########################

NOW=$(date +"%m_%d_%Y")
BACKUP_LOCATION="/u02/backup/dev/$NOW/"
mkdir -p $BACKUP_LOCATION
cd /u02/backup/dev/$NOW

########################################
# Array of databases we want to backup #
########################################

ARRAY="bugs my_company knowledge_base mysql wikidb"

MYSQL_HOTCOPY="/usr/bin/mysqlhotcopy -p xxxxx -q "

echo "----BACKUP STARTED @`date`"

#################################################
# Iterate through Array and Backup each of them #
#################################################

for DATABASE in `echo $ARRAY`
do
echo "Taking backup of $DATABASE.."
$MYSQL_HOTCOPY $DATABASE $BACKUP_LOCATION
echo "Done.."
echo "Compressing backup directory using tar"
/bin/tar cfP $DATABASE.tar $BACKUP_LOCATION/$DATABASE
echo "Done.."
echo "Delete backup directory after tar command"
rm -fR $DATABASE
echo "Done.."
echo "Compressing $DATABASE.tar file using gzip"
/bin/gzip $DATABASE.tar
echo "done"
done

echo " ----- END -----@`date` "

##################################
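To restore one of these backups (a sketch; paths follow the layout above, and 'bugs' stands in for any database in the list), stop MySQL, unpack the tarball, copy the table files back into the datadir and fix ownership:

shell# gunzip bugs.tar.gz
shell# tar xfP bugs.tar
shell# cp -r /u02/backup/dev/MM_DD_YYYY/bugs /var/lib/mysql/
shell# chown -R mysql:mysql /var/lib/mysql/bugs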

Friday, 12 February 2010

Backup MySQL Replication Database Server

Here is the script, which I use to backup MySQL Replication Slave database server.

#!/bin/bash

HOST=localhost
USR=root
PASS=xxxxx
SOCKET=/tmp/mysql.sock

NOW=$(date +"%m_%d_%Y")
BACKUP_LOCATION=/path/to/backup/$NOW

mkdir -p $BACKUP_LOCATION

MYSQL="/usr/bin/mysql -h$HOST -u$USR -p$PASS -S $SOCKET -v"
MYSQLDUMP="/usr/bin/mysqldump -u$USR -p$PASS -S $SOCKET --all-databases"

# STOP_SLAVE
echo 'STOP SLAVE SQL_THREAD; FLUSH TABLES;'| $MYSQL

# GET_SLAVE_STATUS
echo "SHOW SLAVE STATUS\G" | $MYSQL > $BACKUP_LOCATION/bin_log_pos.log

# BACKUP_TABLE_STRUCTURES
$MYSQLDUMP --no-data --routines --result-file=$BACKUP_LOCATION/table_structure.sql

# BACKUP_RECORDS
$MYSQLDUMP --result-file=$BACKUP_LOCATION/all_records.sql

# START_SLAVE
echo "START SLAVE" | $MYSQL

####################

If you prefer, you can automate this script; add the following line to the crontab:

0 4 * * * /path/to/script/backup.sh

Monday, 8 February 2010

MyISAM Table Maintenance

There are numerous ways to check/repair MyISAM tables

1. myisamchk

2. mysqlcheck

3. myisam_recover

One should not use myisamchk on an online database; unlike mysqlcheck, it accesses the table files directly and may damage them. There are two alternative solutions:

  1. MYISAM_RECOVER option: a quick way to automate recovery, but it should be used very carefully. As all live tables are accessed quite frequently, enabling this option may trigger many checks/repairs at once.
  2. CHECK TABLE: it is safer compared to the myisam_recover option. Unlike myisamchk, it does not access the table files directly.

It is not recommended to run extended table checks regularly. However, it is highly recommended to use the medium/quick check options to ensure that all tables are okay. Furthermore, one should optimize tables regularly, as this reduces the risks associated with table errors. Unlike a check, an optimize operation takes much longer, but the good news is that we can easily identify the table(s) that need optimization.
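A minimal example (table name is a placeholder):

mysql> CHECK TABLE mytable MEDIUM;
mysql> OPTIMIZE TABLE mytable;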

Friday, 5 February 2010

Running Multiple Instances of MySQL on Centos

Today I needed to set up multiple instances of MySQL on CentOS 5. To be honest, there are numerous ways of doing this; I have used a very basic method.

Before you follow these steps:

1. Set up one MySQL configuration file per instance, e.g. /etc/my_1.cnf, /etc/my_2.cnf
2. Set up a separate data directory for each instance
3. A different port number for each instance
4. A different PID file for each instance
5. A different socket file for each instance (otherwise the two servers will fight over the default socket)

MySQL Configuration for 1st instance: /etc/my_1.cnf

#########################################

[mysqld]
datadir=/var/lib/mysql_1
port=3306
socket=/var/lib/mysql_1/mysql.sock

[mysqld_safe]
log-error=/var/log/mysqld_1.log
pid-file=/var/run/mysqld/mysqld_1.pid


MySQL Configuration for 2nd instance: /etc/my_2.cnf

#########################################

[mysqld]
datadir=/var/lib/mysql_2
port=3307
socket=/var/lib/mysql_2/mysql.sock

[mysqld_safe]
log-error=/var/log/mysqld_2.log
pid-file=/var/run/mysqld/mysqld_2.pid

Initializing and starting MySQL:

-- initializing database for 1st instance

shell# mysql_install_db --user=mysql --datadir=/var/lib/mysql_1

-- initializing database for 2nd instance

shell# mysql_install_db --user=mysql --datadir=/var/lib/mysql_2

-- Starting 1st instance

shell# mysqld_safe --defaults-file=/etc/my_1.cnf --user=mysql &

-- Starting 2nd instance

shell# mysqld_safe --defaults-file=/etc/my_2.cnf --user=mysql &
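With both instances running, connect to each one explicitly through its socket or TCP port:

shell# mysql -S /var/lib/mysql_1/mysql.sock -uroot
shell# mysql -h127.0.0.1 -P3307 -uroot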



Thursday, 4 February 2010

Optimize tables that need optimization


This script looks for tables that need optimization. The logic is to check the Data_free field of the SHOW TABLE STATUS command; if it is not zero, the table has unused bytes, i.e. it has holes somewhere. You can customize this script according to your needs, for instance to optimize the tables it finds.

#!/bin/sh

# This script checks tables in the database.
# Additionally, you can supply a filter to check only certain tables.

user=root
pass=xxxxx
db=test
#filter=
filter=${filter:-"%"}  # default to all tables when no filter is supplied
TMP=~/scripts/query.log
LOG=~/scripts/need_optimization.log
MAXBYTES=100

mysql="/usr/bin/mysql -u$user -p$pass -s $db"


# Ok now do the check on each table
echo "Checking database $db ..."
echo "show tables like \"${filter}\"" | $mysql | while read table
do
echo "SHOW TABLE STATUS LIKE \"${table}\"\G" | $mysql > $TMP
bytes=`grep Data_free $TMP | cut -f2 -d: | tr -d " "`

if [ $bytes -gt $MAXBYTES ]; then

echo "--------------------- Table Status ($table) `date`" >> $LOG
cat $TMP >> $LOG
echo "---------------------------------------------------------------------------" >> $LOG

fi

done

echo "Checking database $db finished..."

##############################

Let's run this script:

shell# ./db_optimize.sh

Check which table(s) need optimization:

shell# more need_optimization.log
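Once you know which tables have holes, you can defragment them one at a time (table name is a placeholder):

mysql> OPTIMIZE TABLE test.mytable;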

mysqlcheck automatic script

This script checks tables in the database and captures error messages into a log file. You can customize it to send an alert via email etc.



#!/bin/sh

# This script checks tables in the database.
# Additionally, you can supply a filter to check only certain tables.

user=root
pass=xxxxxx
db=mydb
#filter=
filter=${filter:-"%"}  # default to all tables when no filter is supplied

mysql="/usr/bin/mysql -u$user -p$pass -s $db"
stats_method="SET myisam_stats_method=nulls_equal"

# Ok now do the check on each table
echo "Checking database $db ..."
echo "show tables like \"${filter}\"" | $mysql | while read table
do
echo "$stats_method;check table $table\G" | $mysql > ~/scripts/query_output.log
error=`cat ~/scripts/query_output.log | grep Error`
count=`expr length "$error"`

if [ "$count" != "0" ]; then

echo "--------------------- Check Table Report ($table) `date`" >> ~/scripts/bad_tables.log
tail --lines=4 ~/scripts/query_output.log >> ~/scripts/bad_tables.log
echo "------------------------------------------------------------------------------------" >> ~/scripts/bad_tables.log

fi
done

echo "Checking database $db finished..."

Now run the script

shell# ./check_table.sh

Let's see if there are any bad tables:

shell# more bad_tables.log

Wednesday, 20 January 2010

multi-threaded mysqldump

Maatkit's mk-parallel-dump gives us a multi-threaded mysqldump. What a great feature, isn't it? Let's see how we can use it.

Installation:
------------

1. We need to install the latest version of Perl (v5.8.8 or higher)
2. Download mk-parallel-dump script
i.e. wget http://www.maatkit.org/get/mk-parallel-dump
3. Test if it works!
shell# chmod +x mk-parallel-dump
shell# ./mk-parallel-dump --help

We have now successfully installed the multi-threaded mysqldump tool! Now it's time to learn it.

Let's back up the mysql database:

[root@localhost maatkit]# ./mk-parallel-dump --user root --databases mysql
CHUNK TIME EXIT SKIPPED DATABASE.TABLE
db 0.49 0 0 mysql
all 0.49 0 0 -

[root@localhost maatkit]# ls -lh mysql/
total 476K
-rw-r--r-- 1 root root 654 Jan 20 14:39 00_columns_priv.sql
-rw-r--r-- 1 root root 1.6K Jan 20 14:39 00_db.sql
-rw-r--r-- 1 root root 351 Jan 20 14:39 00_func.sql
-rw-r--r-- 1 root root 325 Jan 20 14:39 00_help_category.sql
-rw-r--r-- 1 root root 230 Jan 20 14:39 00_help_keyword.sql
-rw-r--r-- 1 root root 243 Jan 20 14:39 00_help_relation.sql
-rw-r--r-- 1 root root 360 Jan 20 14:39 00_help_topic.sql
-rw-r--r-- 1 root root 1.5K Jan 20 14:39 00_host.sql
-rw-r--r-- 1 root root 750 Jan 20 14:39 00_procs_priv.sql
-rw-r--r-- 1 root root 1.6K Jan 20 14:39 00_proc.sql
-rw-r--r-- 1 root root 838 Jan 20 14:39 00_tables_priv.sql
-rw-r--r-- 1 root root 234 Jan 20 14:39 00_time_zone_leap_second.sql
-rw-r--r-- 1 root root 190 Jan 20 14:39 00_time_zone_name.sql
-rw-r--r-- 1 root root 232 Jan 20 14:39 00_time_zone.sql
-rw-r--r-- 1 root root 291 Jan 20 14:39 00_time_zone_transition.sql
-rw-r--r-- 1 root root 403 Jan 20 14:39 00_time_zone_transition_type.sql
-rw-r--r-- 1 root root 2.6K Jan 20 14:39 00_user.sql
-rw-r--r-- 1 root root 0 Jan 20 14:39 columns_priv.000000.sql
-rw-r--r-- 1 root root 197 Jan 20 14:39 db.000000.sql
-rw-r--r-- 1 root root 0 Jan 20 14:39 func.000000.sql
-rw-r--r-- 1 root root 1.2K Jan 20 14:39 help_category.000000.sql
-rw-r--r-- 1 root root 6.4K Jan 20 14:39 help_keyword.000000.sql
-rw-r--r-- 1 root root 7.9K Jan 20 14:39 help_relation.000000.sql
-rw-r--r-- 1 root root 376K Jan 20 14:39 help_topic.000000.sql
-rw-r--r-- 1 root root 0 Jan 20 14:39 host.000000.sql
-rw-r--r-- 1 root root 0 Jan 20 14:39 proc.000000.sql
-rw-r--r-- 1 root root 0 Jan 20 14:39 procs_priv.000000.sql
-rw-r--r-- 1 root root 0 Jan 20 14:39 tables_priv.000000.sql
-rw-r--r-- 1 root root 0 Jan 20 14:39 time_zone.000000.sql
-rw-r--r-- 1 root root 0 Jan 20 14:39 time_zone_leap_second.000000.sql
-rw-r--r-- 1 root root 0 Jan 20 14:39 time_zone_name.000000.sql
-rw-r--r-- 1 root root 0 Jan 20 14:39 time_zone_transition.000000.sql
-rw-r--r-- 1 root root 0 Jan 20 14:39 time_zone_transition_type.000000.sql
-rw-r--r-- 1 root root 783 Jan 20 14:39 user.000000.sql
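Maatkit also ships a companion tool, mk-parallel-restore, which can load a dump directory like the one above back into the server. It is installed the same way as mk-parallel-dump and used roughly like this (check ./mk-parallel-restore --help for the options your version supports):

[root@localhost maatkit]# wget http://www.maatkit.org/get/mk-parallel-restore
[root@localhost maatkit]# chmod +x mk-parallel-restore
[root@localhost maatkit]# ./mk-parallel-restore --user root mysql/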

Tuesday, 19 January 2010

innodb_file_per_table- move specific tables to separate physical disks

Using innodb_file_per_table can be useful if you want to move a specific database or table(s) to a separate disk.
To enable multiple tablespaces, add a line to the [mysqld] section of the my.cnf (or my.ini) file and restart the MySQL server:
[mysqld]
innodb_file_per_table

New InnoDB tables will then have their own .frm and .ibd files; this is similar to what the MyISAM storage engine does, except MyISAM divides each table into .MYD (data) and .MYI (index) files.
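After the restart, it's worth confirming the setting is active; note that tables created before the change stay in the shared tablespace until they are rebuilt, e.g. with a null ALTER (the table name below is a placeholder):

mysql> SHOW VARIABLES LIKE 'innodb_file_per_table';
mysql> ALTER TABLE my_db.some_table ENGINE=InnoDB;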

In this post, I'll show you how to move a specific database onto a separate disk.

Assumptions:

- datadir= /var/lib/mysql
- you have basic linux skills

Steps:

log on to mysql

shell# mysql -uroot -p
# create database
mysql> create database my_db;
mysql> quit;

# stop mysql
shell# /etc/init.d/mysql stop

# create new directory onto separate disk
shell# mkdir /disk2/innodb

# move database onto separate disk
shell# cd /var/lib/mysql
shell# mv my_db /disk2/innodb

# create symlink
shell# ln -s /disk2/innodb/my_db my_db

# correct permissions
shell# chown -R mysql:mysql /disk2/innodb

# start mysql server
shell# /etc/init.d/mysql start

You have now successfully moved the my_db database onto a separate disk.