In MySQL, hierarchical data is stored in parent and child tables, and foreign keys are often used to enforce referential integrity, e.g. to keep data consistent by disallowing DELETE/UPDATE operations on a parent row that is referenced in a child table. One way to address this is the ON DELETE/UPDATE CASCADE option: delete or update the row in the parent table, and the matching rows in the child tables are automatically deleted or updated. However, cascading is often not enabled when foreign keys are implemented, and it has performance implications: referential checks are performed row by row, so if a parent record has millions of related records in child tables the operation can take considerably long to complete. Furthermore, quoting the MySQL docs, a "cascading operation may not be nested more than 15 levels deep". For more information visit http://dev.mysql.com/doc/refman/5.7/en/innodb-foreign-key-constraints.html
Another way to delete hierarchical data is to first identify all child tables (at all N levels deep), which can be difficult in a big schema (with hundreds of tables), and then delete in reverse order, i.e. delete data from the child tables first.
Note: you can obtain information about foreign keys by querying the INFORMATION_SCHEMA.KEY_COLUMN_USAGE table.
http://dev.mysql.com/doc/refman/5.7/en/key-column-usage-table.html
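The reverse-order idea can be sketched as a topological sort over the foreign-key graph (which could be built from KEY_COLUMN_USAGE); the function and table names below are hypothetical, and the sketch assumes no circular foreign keys:

```python
from collections import defaultdict

def delete_order(foreign_keys):
    """Given (child_table, parent_table) foreign-key pairs, return a safe
    deletion order: every child table appears before the tables it references."""
    children = defaultdict(set)   # parent table -> child tables referencing it
    tables = set()
    for child, parent in foreign_keys:
        children[parent].add(child)
        tables.update((child, parent))
    order, seen = [], set()
    def visit(table):
        if table in seen:
            return
        seen.add(table)
        for child in children[table]:   # delete all descendants first
            visit(child)
        order.append(table)
    for t in sorted(tables):
        visit(t)
    return order

# Hypothetical schema: order_items references orders, orders references customers
fks = [("orders", "customers"), ("order_items", "orders")]
print(delete_order(fks))  # ['order_items', 'orders', 'customers']
```

Issuing DELETE statements in the returned order avoids foreign-key errors without enabling cascading deletes.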
To simplify this operation, I have developed a program that helps delete hierarchical data:
https://github.com/aftabakhan/dbtools
database backups, replication, installation, performance tuning, High Availability
Thursday, 19 May 2016
Monday, 10 June 2013
Restrictions on Replication with GTIDs
I came across some issues when converting MySQL regular replication to replication with GTIDs.
Keywords used in this document:
MySQL replication - Replication enables data from one MySQL database server (the master) to be replicated to one or more MySQL database servers (the slaves). Replication is asynchronous by default.
GTID - A global transaction identifier (GTID) is a unique identifier created and associated with each transaction when it is committed on the server of origin (master). This identifier is unique not only to the server on which it originated, but across all servers in a given replication setup. There is a 1-to-1 mapping between all transactions and all GTIDs. For more information about GTIDs click here
Restrictions on Replication with GTIDs are documented here
1. Temporary tables - CREATE TEMPORARY TABLE and DROP TEMPORARY TABLE are not supported inside a transaction, e.g.
START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)

CREATE TEMPORARY TABLE t2 like t;
ERROR 1787 (HY000): When ENFORCE_GTID_CONSISTENCY = 1, the statements CREATE TEMPORARY TABLE and DROP TEMPORARY TABLE can be executed in a non-transactional context only, and require that AUTOCOMMIT = 1.
2. Updates involving non-transactional storage engines - It is not possible to mix updates to non-transactional tables (such as MyISAM) and transactional tables (such as InnoDB) within the same transaction.
The same issue can arise when:
- The master and slave use different storage engines for their respective versions of the same table, e.g. table 'A' uses InnoDB on the master but MyISAM on the replication slave.
- The master and slave hosts are configured with different binary log formats, e.g. the master uses 'ROW' format and the slave uses 'STATEMENT'.
3. CREATE TABLE ... SELECT - Statements of this type are not supported, e.g.
create table t2 select * from t;
ERROR 1786 (HY000): CREATE TABLE ... SELECT is forbidden when ENFORCE_GTID_CONSISTENCY = 1.
Conclusion
As a workaround, CREATE/DROP TEMPORARY TABLE statements can be performed outside transactions, and CREATE TABLE ... SELECT can be split into two statements, such as CREATE TABLE t1 followed by INSERT INTO t1 SELECT. Similarly, developers would be forced to convert non-transactional tables (e.g. MyISAM) to InnoDB. In short, it would be a major code change for an existing application.
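The two-statement workaround can be sketched outside MySQL as well; here is a self-contained illustration using SQLite as a stand-in (the table names are the ones from the error example above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, data TEXT)")
con.executemany("INSERT INTO t (data) VALUES (?)", [("a",), ("b",)])

# Instead of a single CREATE TABLE t2 ... SELECT (forbidden when
# ENFORCE_GTID_CONSISTENCY = 1), run two separate statements:
con.execute("CREATE TABLE t2 (id INTEGER PRIMARY KEY, data TEXT)")
con.execute("INSERT INTO t2 SELECT id, data FROM t")

print(con.execute("SELECT COUNT(*) FROM t2").fetchone()[0])  # 2
```

The same split works in MySQL: each statement is its own transaction, so each gets its own GTID.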
Thursday, 28 February 2013
MySQL 5.6 - Online DDL Operations
MySQL 5.6 supports several kinds of online DDL operations on InnoDB tables, so it is now possible to perform DML operations such as INSERT, UPDATE, DELETE and SELECT on a table while a DDL operation on that table is in progress. For more details I strongly advise you to read the MySQL docs here and here. In this blog I will demonstrate a few quick examples:
a) Remove a column from table 't1' - DML operations can proceed while the DDL, i.e. ALTER TABLE ... DROP COLUMN, is in progress:
mysql >show processlist;
+-----+------+-----------+------+---------+------+----------------+----------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+-----+------+-----------+------+---------+------+----------------+----------------------------------+
| 67 | root | localhost | test | Query | 0 | init | show processlist |
| 604 | root | localhost | test | Query | 3 | altering table | alter table t1 drop column data2 |
+-----+------+-----------+------+---------+------+----------------+----------------------------------+
2 rows in set (0.00 sec)
mysql >insert into t1 values (null,'xxxx','yyyy');
Query OK, 1 row affected (0.00 sec)
mysql >select * from t1 where id=2;
+----+----------------------------------------------------+-------+
| id | data | data2 |
+----+----------------------------------------------------+-------+
| 2 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | a |
+----+----------------------------------------------------+-------+
1 row in set (0.00 sec)
b) Add a new column to table 't1' - DML operations can proceed as long as they do not access the new column being added:
mysql>show processlist;
+-----+------+-----------+------+---------+------+----------------+---------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+-----+------+-----------+------+---------+------+----------------+---------------------------------------------------------+
| 67 | root | localhost | test | Query | 0 | init | show processlist |
| 604 | root | localhost | test | Query | 38 | altering table | alter table t1 add column data2 varchar(20) default 'a' |
+-----+------+-----------+------+---------+------+----------------+---------------------------------------------------------+
2 rows in set (0.00 sec)
mysql>insert into test.t1 values (null,'xxxx');
Query OK, 1 row affected (0.02 sec)
mysql>insert into test.t1 values (null,'xxxx','yyyy');
ERROR 1136 (21S01): Column count doesn't match value count at row 1
Monday, 25 February 2013
Replication checksums in MySQL 5.6
Replication checksums are one of the listed new features of MySQL 5.6; without them, data integrity might break as documented here: replication-checksum. In reality this feature is not completely new to MySQL: in MySQL 5.5 and earlier, SSL replication can be used as a workaround to get checksumming. You can learn more about SSL replication here. Similarly, you can see my blog post about SSL connections with MySQL here.
Please note that replication checksums are enabled by default. The following variables can be used to enable/disable checksums for the binary and relay logs:
a) binlog_checksum - A dynamic variable; the default value is CRC32. This variable causes the master to write a checksum for each event in the binary log.
When enabled, the binary log provides the following information:
#130225 15:33:56 server id 1  end_log_pos 384 CRC32 0x280a5e52  Intvar
SET INSERT_ID=1/*!*/;
#130225 15:33:56 server id 1  end_log_pos 499 CRC32 0x37e25d0a  Query  thread_id=1  exec_time=0  error_code=0
use `test`/*!*/;
SET TIMESTAMP=1361806436/*!*/;
insert into t1 values (null, 'mysql 5.6')
/*!*/;
# at 499
#130225 15:33:56 server id 1  end_log_pos 530 CRC32 0xf56d77c4  Xid = 17
COMMIT/*!*/;
DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
The new information in the above output is the checksum written for each event, e.g. CRC32 0x280a5e52.
On the slave host:
b) slave_sql_verify_checksum - Enabled by default. However, the online docs (here) state that this option is disabled by default, so I take that as a documentation bug. When this option is enabled, the slave examines and verifies checksums when reading the relay log.
show global variables like 'slave_sql_verify_checksum';
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| slave_sql_verify_checksum | ON    |
+---------------------------+-------+
1 row in set (0.00 sec)
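To illustrate the kind of check a CRC32 event checksum provides, here is a small sketch using zlib's CRC32 over an arbitrary byte string (this demonstrates the principle only, not the actual binlog event layout):

```python
import zlib

event = b"insert into t1 values (null, 'mysql 5.6')"
checksum = zlib.crc32(event) & 0xFFFFFFFF  # 32-bit CRC, as written per event

# Verification on read: recompute and compare
assert zlib.crc32(event) & 0xFFFFFFFF == checksum

# Any corruption of the event bytes changes the CRC
corrupted = b"insert into t1 values (null, 'mysql 5.7')"
assert zlib.crc32(corrupted) & 0xFFFFFFFF != checksum

print(f"CRC32 0x{checksum:08x}")
```

This is exactly what the slave does when slave_sql_verify_checksum is enabled: recompute the checksum for each relay log event and stop replication if it does not match.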
Monday, 23 July 2012
Innodb purging stalls
One of my clients reported an incident where they were getting a 'table is full' error on an InnoDB table, even though there seemed to be ample free InnoDB space available. When I looked at the host, I found they were running an old MySQL 5.0 release on a 32-bit platform, and the InnoDB monitor output showed:
History list length 2051529
..
Main thread process no. 9701, id 1111792528, state: sleeping
So the InnoDB purge thread isn't running, and there is no change in the history list length.
Note: Purge thread never progresses when the main thread is in a 'sleep' state.
Related BUG 13847885. Quoting:
Deleting a huge amount of data from
InnoDB tables within a short time could cause the purge operation that removes delete-marked records to stall. This issue could result in unnecessary disk space use, but does not cause any problems with data integrity. If this issue causes a disk space shortage, restart the server to work around it. This issue is only likely to occur on 32-bit platforms.
Bug fixed in 5.0.97, 5.1.63, 5.5.23
Solution: Restart MySQL and upgrade MySQL version
Saturday, 23 June 2012
MySQL - capturing diagnostic data
Sporadic problems such as occasional server stalls can be difficult to diagnose. It is important to capture good diagnostic data while the problem is actually happening, and it gets frustrating when you don't know when it will next hit your database. So we need a way to know when the problem happens - let's call it a 'diagnostic trigger'. We need a good indicator, e.g. a sudden spike in the server's load and/or in status variables such as Threads_connected or Threads_running.
Thanks to Percona Toolkit there is pt-stalk, which watches for a trigger condition to become true and then collects data to help diagnose the problem. I performed a few quick tests using pt-stalk:
Installation:
At the time of writing this article, percona-toolkit 2.1.2 is the latest release:
wget http://www.percona.com/downloads/percona-toolkit/2.1.2/percona-toolkit-2.1.2-1.noarch.rpm
You might have to install the following packages required by percona-toolkit:
yum install perl-Time-HiRes.x86_64
yum install perl-TermReadKey.x86_64
Quick Examples:
A) Collect diagnostic data when at least 40 threads are running:
# pt-stalk --function status --variable Threads_running \
  --threshold 40 --prefix=mysql_trouble --cycles=2 -- \
  -uroot -pxxxx
....
2012_06_23_10_13_11 Check results: Threads_running=1, matched=no, cycles_true=0
2012_06_23_10_13_12 Check results: Threads_running=1, matched=no, cycles_true=0
2012_06_23_10_13_14 Check results: Threads_running=1, matched=no, cycles_true=0
2012_06_23_10_13_15 Check results: Threads_running=40, matched=yes, cycles_true=1
2012_06_23_10_13_16 Check results: Threads_running=41, matched=yes, cycles_true=2
2012_06_23_10_13_16 Collect triggered
2012_06_23_10_13_16 Collector PID 16113
2012_06_23_10_13_16 Sleeping 300 seconds after collect
And the stored diagnostic data is available in its default location:
# ls -lh /var/lib/pt-stalk/
total 572K
-rw-r--r-- 1 root root 4.5K Jun 23 11:08 mysql_trouble-df
-rw-r--r-- 1 root root  153 Jun 23 11:08 mysql_trouble-disk-space
-rw-r--r-- 1 root root  16K Jun 23 11:08 mysql_trouble-diskstats
...
-rw-r--r-- 1 root root  15K Jun 23 11:08 mysql_trouble-top
-rw-r--r-- 1 root root  386 Jun 23 11:08 mysql_trouble-trigger
-rw-r--r-- 1 root root 8.1K Jun 23 11:08 mysql_trouble-variables
-rw-r--r-- 1 root root  992 Jun 23 11:08 mysql_trouble-vmstat
-rw-r--r-- 1 root root  245 Jun 23 11:08 mysql_trouble-vmstat-overall
B) Collect data when at least 20 queries running in the 'copying to tmp table' State.
pt-stalk --function processlist --variable State \
--match Copying --threshold 20 --prefix=mysql_trouble --cycles=2 \
-- -uroot -pxxxxx
..
2012_06_23_11_54_19 Check results: State=21, matched=yes, cycles_true=1
2012_06_23_11_54_20 Check results: State=21, matched=yes, cycles_true=2
2012_06_23_11_54_20 Collect triggered
2012_06_23_11_54_21 Collector PID 9154
2012_06_23_11_54_21 Sleeping 300 seconds after collect
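The `--cycles` behaviour visible in the log output above (the trigger only fires after the condition holds for consecutive checks, which filters out momentary spikes) can be sketched like this; `should_collect` is a hypothetical name, not part of the toolkit:

```python
def should_collect(samples, threshold, cycles):
    """Emulate pt-stalk's trigger: fire only when the watched value meets
    the threshold for `cycles` consecutive checks in a row."""
    consecutive = 0
    for value in samples:
        consecutive = consecutive + 1 if value >= threshold else 0
        if consecutive >= cycles:
            return True
    return False

# Matches the log above: two consecutive checks >= 40 trigger collection
print(should_collect([1, 1, 1, 40, 41], threshold=40, cycles=2))  # True
# A single isolated spike never satisfies cycles=2
print(should_collect([1, 40, 1, 40, 1], threshold=40, cycles=2))  # False
```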
You will probably want to run this tool as a daemon. Here's a sample configuration file for triggering when there are more than 40 threads running at once:
# Config for pt-stalk
variable=Threads_running
cycles=2      # trigger if problem seen twice in a row
dest=/var/log/mysql_diagnostics
threshold=40
daemonize
-- --user=monitoring --password=xxxx

Syntax to run pt-stalk as a daemon:
# pt-stalk --config /etc/pt-stalk.cnf
For more information about configuration files click here
Saturday, 16 June 2012
MySQL - How to scale inserts
------- TRX HAS BEEN WAITING 9 SEC FOR THIS LOCK TO BE GRANTED:
TABLE LOCK table `alertxxxx/alertdata` trx id 0 685590474 lock mode AUTO-INC waiting
------------------
---TRANSACTION 0 685590465, ACTIVE 10 sec, process no 8457, OS thread id 1169045824 setting auto-inc lock

Why an AUTO-INC lock? When accessing the auto-increment counter, InnoDB uses a special table-level AUTO-INC lock that it keeps until the end of the current SQL statement, not the end of the transaction. This effectively causes all inserts into the same table to serialize. With single-row inserts this is normally not too bad, but it can prevent scalability with multiple threads inserting (Bug #16979). However, you can reach better throughput (inserts per second) with fewer threads. So we initially dropped the number of threads on both clusters by 50%, taking it to 20-20 sessions, and the problem almost disappeared; when we further reduced the number of threads to 10-10 sessions, the problem disappeared!
Beginning with MySQL 5.1.22, a new locking model was introduced for handling auto-increment in InnoDB. There are good articles that talk about this here and here.
Similarly, if you want to achieve fast insert performance, it can be interesting to load files instead of executing the inserts one by one: it is 3 to 4 times faster. If the goal is loading a huge amount of data that is already known at that time, it is probably the best option.
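To illustrate the row-by-row vs. bulk loading point, here is a self-contained sketch using SQLite as a stand-in for MySQL (the simple id/content schema is hypothetical); batching rows through one prepared statement removes most of the per-statement overhead:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, content TEXT)")

rows = [("bla bla" * 10,) for _ in range(1000)]

# Row by row: one statement per record
for r in rows[:500]:
    con.execute("INSERT INTO t (content) VALUES (?)", r)

# Batched: a single prepared statement executed over the whole set,
# analogous to multi-row INSERT or LOAD DATA style bulk loading
con.executemany("INSERT INTO t (content) VALUES (?)", rows[500:])
con.commit()

print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1000
```

The exact speedup depends on the server, network round trips and commit settings, but the shape of the win is the same in MySQL.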
Optimization:
1. Optimize database structure - Design your tables to minimize their space on disk. This can yield huge improvements by reducing the amount of data written to and read from disk. Smaller tables normally require less main memory while their contents are being actively processed during query execution, and any space reduction for table data also results in smaller indexes that can be processed faster. Ensure columns have the right data types and require the least amount of storage; try to use the most precise type in all cases. For example, if an integer column is used for values in the range from 1 to 99999, MEDIUMINT UNSIGNED is the best type. For more information about optimizing database structure click here. If you store large strings (TEXT) or BLOBs, compression may help; there is an article about it here.
2. Innodb flush method - e.g. O_DIRECT, if used can help to avoid double buffering between the InnoDB buffer pool and the operating system's filesystem cache. MySQL reference manual explain this here.
3. InnoDB thread concurrency - Keep the value of this variable low (the default of 8 is usually fine); the correct value depends on your environment and workload. This variable is explained here.
4. InnoDB buffer pool - InnoDB maintains a buffer pool for caching data and indexes in memory. Making the pool larger can improve performance by reducing the amount of disk I/O needed; there are good articles that explain this here and here.
5. InnoDB log file size - The larger the value, the less checkpoint flush activity is needed in the buffer pool, saving disk I/O; but larger log files also mean slower recovery after a crash. As of MySQL 5.5, recovery times and the log flushing algorithm have been greatly improved, so with 5.5 you generally want larger log files. An article about selecting the InnoDB log file size is here.
6. innodb_flush_log_at_trx_commit - The default value of 1 is required for ACID compliance. You can achieve better performance with a different value, but then you can lose up to one second's worth of transactions in a crash. With a value of 0, any mysqld process crash can erase the last second of transactions; with a value of 2, only an operating system crash or a power outage can. InnoDB's crash recovery itself is not affected and works regardless of the value.
7. Filesystem selection and disk issues - 'xfs' is known to perform very well with MySQL. Also, writing the redo logs, binary logs and data files to different physical disks is a good practice, often with a bigger gain than server configuration changes. RAID 10 is recommended for best performance; more detail about disk issues can be found here and here.
Labels:
innodb performance,
mysql,
mysql server tuning,
scale writes
Wednesday, 30 May 2012
Data compression
1. Use compress/uncompress function in the database
http://dev.mysql.com/doc/refman/5.1/en/encryption-functions.html#function_compress
2. Innodb-plugin offers table compression
http://dev.mysql.com/doc/innodb-plugin/1.0/en/innodb-compression-background.html
3. Compress/uncompress data in the client, not in the database. This offloads both CPU and network. This approach uses CPU cycles for compression and uncompression on the client machine rather than the database server, which might be appropriate for a distributed application with many clients, or where the client machines have spare CPU cycles.
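Client-side compression (approach 3) can stay compatible with the storage format the MySQL docs describe for COMPRESS(): a four-byte uncompressed length (low byte first) followed by a zlib stream. The helper names below are hypothetical, a sketch rather than an official client API:

```python
import struct
import zlib

def mysql_compress(data: bytes) -> bytes:
    """Client-side equivalent of MySQL COMPRESS(): 4-byte little-endian
    uncompressed length, then the zlib-compressed payload."""
    if not data:
        return b""
    return struct.pack("<I", len(data)) + zlib.compress(data)

def mysql_uncompress(blob: bytes) -> bytes:
    """Inverse of mysql_compress(); validates the stored length."""
    if not blob:
        return b""
    (length,) = struct.unpack("<I", blob[:4])
    out = zlib.decompress(blob[4:])
    assert len(out) == length
    return out

content = b"bla bla" * 5000
packed = mysql_compress(content)
print(len(content), len(packed))  # highly repetitive data compresses well
assert mysql_uncompress(packed) == content
```

Data compressed this way on the client should remain readable via UNCOMPRESS() on the server, and vice versa, though you should verify that against your MySQL version before relying on it.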
We performed benchmarks (using the table structure mentioned in my previous post: binary-log-synchronization) to evaluate the impact of the data compression techniques mentioned above (1 & 2 only). Test data was loaded using 8 parallel database sessions into the original table, into the compressed table, and into the original table using the compress() function on the "content" column.
| Table Format | Time to load 80k records | Data file size |
|---|---|---|
| Original Table | 0m57.515s | 1.6GB |
| Compressed Table: ROW_FORMAT=COMPRESSED | 1m9.613s | 648MB |
| Original Table using compress() function | 0m35.883s | 156MB |
The results show that inserting data in compressed format (i.e. using the compress() function) is the fastest approach, and it gives better compression than an InnoDB compressed table. With the compress() function we control which columns to compress, whereas an InnoDB compressed table compresses the entire table.
We also performed benchmarks reading from the original table, the compressed table, and the original table using the uncompress() function (to uncompress the compressed data); a total of 800k lookups were performed using id=<
| Table | Time to perform 800k lookups |
|---|---|
| Compressed Table: row_format=compressed | 1m26.426s |
| Original Table using uncompress() function | 1m26.197s |
| Original table | 6m29.504s |
The test results show that lookups against the compressed tables are more than 4 times faster than against the original table. The reason is that the compressed table is much smaller than the original table, so most of the data can fit in memory, reducing disk I/O.
Labels:
data compression,
innodb,
innodb plugin,
mysql,
mysql performance
Wednesday, 18 April 2012
Binary log Synchronization
I performed some load tests to see how much it can hurt database performance when a battery-backed disk cache is missing:
Total 40,000 records inserted into table using 4 parallel db connections:
sync_binlog=3 simply means: only fsync the binlog every third transaction.
The higher you set sync_binlog, the closer the results get to sync_binlog=0 (no fsyncing).
A value of 1 is safest of course, but as you can see, without a decent battery-backed write-caching controller, performance is very bad.
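The cost being measured is the fsync() per commit. As a rough, hypothetical mock of that pattern (not MySQL code; absolute numbers depend entirely on the disk and its write cache):

```python
import os
import tempfile
import time

def write_records(n, fsync_every):
    """Append n fixed-size records to a temp file, calling fsync()
    every `fsync_every` writes (0 = never), mimicking what
    sync_binlog=N does for binary log writes."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for i in range(n):
            os.write(fd, b"x" * 128)
            if fsync_every and (i + 1) % fsync_every == 0:
                os.fsync(fd)
        return time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)

# Fewer fsyncs -> less time waiting on the disk; compare the timings
for every in (0, 3, 1):
    print(f"fsync_every={every}: {write_records(500, every):.4f}s")
```

On hardware without a battery-backed write cache, the fsync_every=1 case is dramatically slower, which mirrors the sync_binlog=1 results above.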
Test case :
Tools used:
-- MySQL 5.1.56
-- mysqlslap
-- platform -
Red Hat 4.1.2-50 x86_64
, RAID 5 (8 disks, 67.75 GB each, Vendor DELL)
, Intel(R) Xeon(R) CPU 5148 @ 2.33GHz, 4 CPUs
, 8G RAM
# Connect to MySQL client and perform:
drop database if exists test_case_9;
create database test_case_9;
use test_case_9;
CREATE TABLE t (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`content` blob,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=800001 DEFAULT CHARSET=utf8;
drop function if exists insert_data;
delimiter //
CREATE DEFINER=`root`@`localhost` FUNCTION `insert_data`() RETURNS int(11)
DETERMINISTIC
BEGIN
INSERT INTO t(id,content) values (null,REPEAT('bla bla',5000));
RETURN ROW_COUNT();
END
//
delimiter ;
-- Change sync_binlog to value 0
mysql> set global sync_binlog=0;
mysql> quit
# Run following steps from a shell prompt
-- load data using mysqlslap tool
$ time mysqlslap --user=root --query="select test_case_9.insert_data();" --concurrency=4 --iterations=10000;
Benchmark
Average number of seconds to run all queries: 0.001 seconds
Minimum number of seconds to run all queries: 0.001 seconds
Maximum number of seconds to run all queries: 0.229 seconds
Number of clients running queries: 4
Average number of queries per client: 1
real 0m17.972s
user 0m1.306s
sys 0m3.096s
A) -- Change sync_binlog to value 1 and empty test table 't'
$ mysql -uroot -S /mnt/database/mysql_5.1/mysql.sock test_case_9 -e"truncate table t; set global sync_binlog=1;"
B) -- load data using mysqlslap tool
$ time mysqlslap --user=root --query="select test_case_9.insert_data();" --concurrency=4 --iterations=10000;
Benchmark
Average number of seconds to run all queries: 0.006 seconds
Minimum number of seconds to run all queries: 0.003 seconds
Maximum number of seconds to run all queries: 0.018 seconds
Number of clients running queries: 4
Average number of queries per client: 1
real 1m4.239s
user 0m1.672s
sys 0m4.203s
Repeat steps A and B to run load tests using values 2, 3, and 4 for sync_binlog.
Tuesday, 6 March 2012
MySQL - Event Scheduler
MySQL Events are tasks that run according to a schedule. Therefore, we sometimes refer to them as scheduled events. Basically an event is a named database object containing one or more SQL statements to be executed at one or more regular intervals, beginning and ending at a specific date and time. Conceptually, this is similar to the idea of the Unix crontab (also known as a “cron job”) or the Windows Task Scheduler.
Here is the summary of Event object features:
- An event is uniquely identified by its name and the schema to which it is assigned
- An event performs a specific action according to a schedule. This action consists of an SQL statement, which can be a compound statement in BEGIN .. END;
- An event's timing can be either one-time or recurrent. A one-time event executes one time only. A recurrent event repeats its action at a regular interval.
For further details, visit :http://dev.mysql.com/doc/refman/5.1/en/events-overview.html
Let's create an Event object to increment 'counter' column of event_test table every minute:
mysql> CREATE EVENT my_event ON SCHEDULE EVERY 1 MINUTE DO UPDATE event_test set counter=counter+1;
Query OK, 0 rows affected (0.00 sec)

Metadata about events can be obtained as follows:
mysql> SHOW CREATE EVENT my_event\G
*************************** 1. row ***************************
Event: my_event
sql_mode:
time_zone: SYSTEM
Create Event: CREATE EVENT `my_event`
ON SCHEDULE EVERY 1 MINUTE STARTS '2012-03-06 16:46:01'
ON COMPLETION NOT PRESERVE ENABLE
DO update event_test set counter=counter+1
character_set_client: latin1
collation_connection: latin1_swedish_ci
Database Collation: latin1_swedish_ci
1 row in set (0.00 sec)
Since this is a repeating event, it may not terminate within its scheduling interval, and the result may be multiple instances of the event executing simultaneously. This is undesirable; use the GET_LOCK() function or LOCK TABLE to avoid it. Let's check whether 'my_event' has actually worked:
mysql> SELECT SLEEP(60); SELECT * FROM event_test;
+-----------+
| SLEEP(60) |
+-----------+
|         0 |
+-----------+
1 row in set (1 min 0.01 sec)

+---------+
| counter |
+---------+
|       0 |
+---------+
1 row in set (0.00 sec)
No, it has not worked, because the event scheduler thread is not running. The global event_scheduler system variable determines whether the Event Scheduler is enabled and running on the server:

mysql> select @@GLOBAL.event_scheduler;
+--------------------------+
| @@GLOBAL.event_scheduler |
+--------------------------+
| OFF                      |
+--------------------------+
1 row in set (0.00 sec)
To enable the Event Scheduler, use this SQL:

mysql> set GLOBAL event_scheduler = ON;
Query OK, 0 rows affected (0.00 sec)
When the Event Scheduler is ON, the event scheduler thread is listed in the output of SHOW PROCESSLIST as a daemon process, and its state is represented as shown here:
mysql> SHOW PROCESSLIST;
...
| 43 | event_scheduler | localhost | NULL | Daemon | 35 | Waiting for next activation | NULL |
+----+-----------------+-----------+------+---------+------+
Let's check the status of 'counter' after turning the Event Scheduler ON:

mysql> SELECT SLEEP(60); SELECT * FROM event_test;
+-----------+
| SLEEP(60) |
+-----------+
|         0 |
+-----------+
1 row in set (1 min 0.00 sec)

+---------+
| counter |
+---------+
|       1 |
+---------+
1 row in set (0.00 sec)

mysql> SELECT SLEEP(60); SELECT * FROM event_test;
+-----------+
| SLEEP(60) |
+-----------+
|         0 |
+-----------+
1 row in set (1 min 0.01 sec)

+---------+
| counter |
+---------+
|       2 |
+---------+
1 row in set (0.00 sec)
To disable my_event, use this ALTER EVENT statement:

ALTER EVENT my_event DISABLE;

This was just a simple example to show how to automate database tasks inside the MySQL server.
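The GET_LOCK() advice above translates to the usual "skip the run if the previous instance is still going" pattern. Here is a minimal sketch with a hypothetical job body, using an in-process lock in place of MySQL's named lock:

```python
import threading
import time

lock = threading.Lock()
runs, skipped = [], []

def my_event_body(i):
    """Recurring-job body guarded like SELECT GET_LOCK('my_event', 0):
    if a previous instance still holds the lock, skip this run instead
    of piling up overlapping executions."""
    if not lock.acquire(blocking=False):
        skipped.append(i)
        return
    try:
        time.sleep(0.05)  # simulate work that outlasts the interval
        runs.append(i)
    finally:
        lock.release()

# Fire four "activations" at once, as an overloaded scheduler might
threads = [threading.Thread(target=my_event_body, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(runs), len(skipped))  # every activation either ran or was skipped
```

In the MySQL event body the same shape would be GET_LOCK with a zero timeout at the top and RELEASE_LOCK at the end, so overlapping activations bail out immediately.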
Tuesday, 9 November 2010
Short index length (good or bad)
If you find the workload on the DB server is disk bound and you do not have enough memory to increase the InnoDB buffer pool size, you can help performance by reducing the length of your indexes: more of the index fits in memory, which speeds up write operations. It may also impact SELECT queries, but not necessarily for the worse. Allow me to show an example: I will lower the index length on my test table from 40 characters to just 8:
To summarize: if you have good cardinality, a shorter index length will speed up both write and read operations, especially if you are I/O bound.
Note: You may consider trying different lengths for your index and check which best fit your workload (both writing and reading speed).
ALTER TABLE csc52021 DROP INDEX value_40, ADD INDEX value_8(value(8));
To look up all the rows that have a value between 'aaa' and 'b':

mysql> EXPLAIN SELECT COUNT(*) FROM csc52021 WHERE value>'aaa%' AND value<'b';
+----+-------------+----------+-------+---------------+---------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+----------+-------+---------------+---------+---------+------+------+-------------+
| 1 | SIMPLE | csc52021 | range | value_8 | value_8 | 11 | NULL | 9996 | Using where |
+----+-------------+----------+-------+---------------+---------+---------+------+------+-------------+
1 row in set (0.00 sec)
mysql> SELECT COUNT(*) FROM csc52021 WHERE value>'aaa%' AND value<'b';
+----------+
| COUNT(*) |
+----------+
| 5533 |
+----------+
1 row in set (0.02 sec)
As you can see, the index is used. What if we look for a specific value?
mysql> EXPLAIN SELECT COUNT(*) FROM csc52021 WHERE value='a816d6ce93c2aa992829f7d0d9357db896d5e7de';
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
| 1 | SIMPLE | csc52021 | ref | value_8 | value_8 | 11 | const | 1 | Using where |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
1 row in set (0.00 sec)
mysql> SELECT COUNT(*) FROM csc52021 WHERE
    -> value='a816d6ce93c2aa992829f7d0d9357db896d5e7de';
+----------+
| COUNT(*) |
+----------+
| 1 |
+----------+
1 row in set (0.00 sec)
If a SELECT statement looks up a value that has very low cardinality (for example, if millions of rows started with the same first 16 characters, "a816d6ce93c2aa99"), then a short index is not very efficient.
On the other hand, a short index length allows more index values to fit in memory, increasing lookup speed.
In my test table, I can decrease the index length to 4 and still have good performance. For example:
mysql> ALTER TABLE csc52021 DROP INDEX value_8, ADD INDEX
    -> value_4(value(4));
mysql> EXPLAIN SELECT COUNT(*) FROM csc52021 WHERE
    -> value='a816d6ce93c2aa992829f7d0d9357db896d5e7de';
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
| 1 | SIMPLE | csc52021 | ref | value_4 | value_4 | 7 | const | 2 | Using where |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
1 row in set (0.00 sec)
mysql> EXPLAIN SELECT COUNT(*) FROM csc52021 WHERE value LIKE 'a816%';
+----+-------------+----------+-------+---------------+---------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+----------+-------+---------------+---------+---------+------+------+-------------+
| 1 | SIMPLE | csc52021 | range | value_4 | value_4 | 7 | NULL | 2 | Using where |
+----+-------------+----------+-------+---------------+---------+---------+------+------+-------------+
1 row in set (0.00 sec)
mysql> SELECT COUNT(*) FROM csc52021 WHERE value LIKE 'a816%';
+----------+
| COUNT(*) |
+----------+
| 2 |
+----------+
1 row in set (0.00 sec)
To summarize:
If you have good cardinality, a shorter index length will speed up both write and read operations, especially if you are I/O bound.
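One way to pick a prefix length before creating the index is to compare the selectivity of candidate prefixes against the full column. This is a generic sketch using the test table from this post; which lengths you test is up to you:

```sql
-- Selectivity = distinct values / total rows; the shortest prefix whose
-- selectivity is close to the full column's is a good candidate.
SELECT COUNT(DISTINCT LEFT(value, 4))  / COUNT(*) AS sel_4,
       COUNT(DISTINCT LEFT(value, 8))  / COUNT(*) AS sel_8,
       COUNT(DISTINCT LEFT(value, 16)) / COUNT(*) AS sel_16,
       COUNT(DISTINCT value)           / COUNT(*) AS sel_full
FROM csc52021;
```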
Thursday, 29 July 2010
Backup MySQL binary logs
It is always good to make a backup of all the log files you are about to delete. Alternatively, if you take incremental backups, you should rotate the binary log using FLUSH LOGS. Once that is done, copy to the backup location all binary logs from the one current at the time of the last full or incremental backup up to the last but one; these binary logs are the incremental backup.
Here is a bash script you can use to back up binary logs; all you need to do is change the following parameters to suit your environment. The script is adapted from an idea I found elsewhere:
#!/bin/bash
#
# This script backs up binary log files
#
backup_user=dba
backup_password=xxxx
backup_port=3306
backup_host=localhost
log_file=/var/log/binlog_backup.log
binlog_dir=/mnt/database/logs # Path to binlog
backup_dir=/mnt/archive/binlogs/tench # Path to Backup directory
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
export PATH
Log()
{
echo "`date` : $*" >> $log_file
}
mysql_options()
{
common_opts="--user=$backup_user --password=$backup_password"
if [ "$backup_host" != "localhost" ]; then
common_opts="$common_opts --host=$backup_host --port=$backup_port"
fi
}
mysql_command()
{
# $1 = database, $2 = statement to execute
mysql $common_opts --batch --skip-column-names "$1" -e "$2"
}
mysql_options
Log "[INIT] Starting MySQL binlog backup"
Log "Flushing MySQL binary logs (FLUSH LOGS)"
mysql_command mysql "flush logs"
master_binlog=`mysql_command mysql "show master status" 2>/dev/null | cut -f1`
Log "Current binary log is: $master_binlog"
copy_status=0
for b in `mysql_command mysql "show master logs" | cut -f1`
do
if [ -z "$first_log" ]; then
first_log=$b
fi
if [ "$b" != "$master_binlog" ]; then
Log "Copying binary log ${b} to ${backup_dir}"
rsync -av "$backup_host:$binlog_dir/$b" "$backup_dir" >& /dev/null
if [ $? -ne 0 ]; then
copy_status=1
break
fi
else
break
fi
done
if [ $copy_status -eq 1 ]; then
Log "[ERR] Failed to copy binary logs cleanly...aborting"
exit 1
fi
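Once the copies are verified, you may optionally purge the logs you have just backed up so they do not fill the disk. A minimal sketch (the log file name is made up; substitute the current binary log reported by SHOW MASTER STATUS):

```sql
-- Deletes all binary logs strictly older than the named one.
-- Run this only after verifying that the copies in the backup
-- directory are complete and readable.
PURGE BINARY LOGS TO 'mysql-bin.000042';
```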
Friday, 23 July 2010
Key cache hit ratio
When we talk about key cache performance we usually look at these two status variables.
1. Key_read_requests
The number of requests to read a key block from the cache.
2. Key_reads
The number of physical reads of a key block from disk.
There is a good reason to examine Key_reads, because disks are very slow relative to RAM, but Key_reads aren't always physical disk reads at all. If the requested block of data isn't in the operating system's cache, then a Key_read is a disk read; if you are lucky and it is cached, it's just a system call. Having said that, it is always good to minimize Key_reads, which may cause random disk I/O.
A common rule of thumb is to keep the Key_reads : Key_read_requests ratio at 1:100 or better, and Key_writes / Key_write_requests should always be less than 1.
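To check the ratio on a running server, you can read both counters and divide them. A sketch for MySQL 5.1 and later, where the counters are exposed through INFORMATION_SCHEMA.GLOBAL_STATUS (on older versions use SHOW GLOBAL STATUS and do the division by hand):

```sql
-- Cumulative counters since server start; a result above 0.01
-- suggests the key cache (key_buffer_size) may be too small.
SELECT (SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS
        WHERE VARIABLE_NAME = 'KEY_READS')
     / (SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS
        WHERE VARIABLE_NAME = 'KEY_READ_REQUESTS') AS key_miss_ratio;
```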
Finally, I'd like to show you something partially useful you can do with Key_reads:
[xxxxxx ~]$ mysqladmin -uroot -p ext -ri10 | grep Key_reads
Enter password:
| Key_reads | 44605148 |
| Key_reads | 4 |
| Key_reads | 4 |
| Key_reads | 13 |
| Key_reads | 9 |
| Key_reads | 6 |
| Key_reads | 20 |
| Key_reads | 6 |
| Key_reads | 11 |
This server is doing roughly 9 Key_reads every ten seconds, but it is hard to say how many of them are random I/O and how many are just system calls (i.e. reads from the operating system's cache).
Wednesday, 14 July 2010
upgrading to innodb plugin
As of MySQL 5.1.38, the InnoDB Plugin is included in MySQL 5.1 releases, in addition to the built-in version of InnoDB included in previous releases. So all we need to do is enable it; that is, add the following options to the my.cnf file:
[mysqld]
..
ignore-builtin-innodb
plugin-load=innodb=ha_innodb_plugin.so
For further reading, see http://dev.mysql.com/doc/refman/5.1/en/news-5-1-38.html
Assuming you have successfully enabled the InnoDB Plugin, now what? Will you automatically benefit from all the new InnoDB Plugin features?
The answer is yes and no. The InnoDB Plugin supports a new file format, 'Barracuda', which introduces two new data structures: compressed tables, and long variable-length columns stored off-page. By default, the InnoDB Plugin does not create tables in a format that is incompatible with the built-in InnoDB in MySQL.
This means you won't get these two features automatically until you create new tables using the new file format or convert existing tables to it.
The InnoDB Plugin offers lots of other features too, which are enabled by default, so you benefit from them automatically, such as fast index creation and many performance and scalability enhancements.
For full details see http://www.innodb.com/doc/innodb_plugin-1.0/innodb-performance.html
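As a sketch of how to opt in per table (table and column names here are made up; note that the Barracuda format also requires innodb_file_per_table to be enabled):

```sql
-- InnoDB Plugin 1.0 settings; both are required for compressed tables
SET GLOBAL innodb_file_per_table = ON;
SET GLOBAL innodb_file_format = 'Barracuda';

-- New table in the new format, with 8KB compressed pages
CREATE TABLE t_compressed (
  id INT NOT NULL PRIMARY KEY,
  payload VARCHAR(500)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- Converting an existing table rebuilds it in the new format
ALTER TABLE t_existing ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
```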
Friday, 2 July 2010
Repair by keycache vs Repair by sorting
Disabling keys on MyISAM/ARCHIVE tables before performing bulk insert operations gives a considerable speedup, especially when you have many indexes. Re-enabling keys afterwards re-creates the missing indexes; this task can take much longer than the bulk insert itself, sometimes days.
One approach would be not to disable keys in the first place, but would this help at all? It may or may not; personally, I do not suggest it. If you find that the enable-keys step is taking too long, start a new session with MySQL and check its state. A SHOW PROCESSLIST will reveal, not surprisingly, the dreaded "Repair with keycache", which seems to multiply indexing time by a factor of 20 to 30. If you want "Repair by sorting" to be used instead, the theoretical maximum size of every single index must fit within myisam_max_sort_file_size.
Solution: increase myisam_max_sort_file_size:
mysql> SET GLOBAL myisam_max_sort_file_size = 100000 * 1024 * 1024;
Then restart the enable-keys operation.
Similarly, if your repair takes several hours, either use the above solution or consider using the myisamchk utility with the -n (or --sort-recover) option.
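Putting the pieces together, a typical bulk-load session might look like this (the table name is made up):

```sql
-- Skip index maintenance during the load
ALTER TABLE big_table DISABLE KEYS;

-- ... bulk INSERT / LOAD DATA INFILE here ...

-- Make sure every index can be rebuilt by sorting rather than
-- through the key cache (value in bytes; size it to your largest index)
SET GLOBAL myisam_max_sort_file_size = 100000 * 1024 * 1024;

-- Rebuild the indexes; watch SHOW PROCESSLIST for "Repair by sorting"
ALTER TABLE big_table ENABLE KEYS;
```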
Wednesday, 26 May 2010
Using SSL Connections with MySQL
It is very important to secure connection between MySQL Server and Clients. This can be achieved by using SSL connections between MySQL server and client program.
Assumptions:
1. MySQL has been built with SSL support, and you are going to use OpenSSL.
2. The reader has basic Unix/MySQL skills.
3. The client program is running on a Windows machine (IP: 192.168.0.5).
4. The server is running on a CentOS/Red Hat machine (IP: 192.168.0.4).
Setting Up SSL Certificates for MySQL:
The following openssl commands will prompt you for several inputs. For testing, you can press Enter at all prompts; however, the Common Name you enter for the CA, server, and client certificates should each be different, otherwise the connection may fail.
1. Create new folder
shell> mkdir -p /etc/mysql
shell> chown -R mysql:mysql /etc/mysql/
shell> cd /etc/mysql/
2. Create CA certificate
shell> openssl genrsa 2048 > ca-key.pem
shell> openssl req -new -x509 -nodes -days 1000 -key ca-key.pem > ca-cert.pem
3. Create server certificate
shell> openssl req -newkey rsa:2048 -days 1000 -nodes -keyout server-key.pem > server-req.pem
shell> openssl x509 -req -in server-req.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > server-cert.pem
4. Create client certificate
shell> openssl req -newkey rsa:2048 -days 1000 -nodes -keyout client-key.pem > client-req.pem
shell> openssl x509 -req -in client-req.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > client-cert.pem
5. Add these lines into MySQL option file (i.e. /etc/my.cnf)
[mysqld]
..
ssl-ca=/etc/mysql/ca-cert.pem
ssl-cert=/etc/mysql/server-cert.pem
ssl-key=/etc/mysql/server-key.pem
6. Restart MySQL Instance
shell> /etc/init.d/mysql restart
7. Connect to MySQL server using mysql client program and check whether SSL support is enabled
shell> mysql -uroot -pxxxx
mysql> SHOW VARIABLES LIKE 'have_ssl';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| have_ssl | YES |
+---------------+-------+
8. Create a user with REQUIRE SSL option
mysql> GRANT ALL ON test.* TO 'ssl_user'@'192.168.0.5' IDENTIFIED BY 'password' REQUIRE SSL;
9. Copy following files from /etc/mysql/ onto windows client on c:\ssl\
ca-cert.pem
client-cert.pem
client-key.pem
10. Test connectivity from windows O/S. From command prompt run this command
C:\>mysql -h192.168.0.4 -ussl_user -ppassword --ssl-ca=c:\ssl\ca-cert.pem --ssl-cert=c:\ssl\client-cert.pem --ssl-key=c:\ssl\client-key.pem
11. Once connected successfully, run this command to verify SSL connection
mysql> SHOW STATUS LIKE 'Ssl_cipher';
+---------------+--------------------+
| Variable_name | Value |
+---------------+--------------------+
| Ssl_cipher | DHE-RSA-AES256-SHA |
+---------------+--------------------+
Done!
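As a final sanity check (on MySQL 5.x the REQUIRE clause is recorded in the ssl_type column of mysql.user), you can confirm which accounts require SSL:

```sql
-- ssl_type is 'ANY' for REQUIRE SSL, 'X509' for REQUIRE X509, etc.
SELECT user, host, ssl_type FROM mysql.user WHERE user = 'ssl_user';
```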