It's been up for a few days and runs fast and well, but twice now it has suddenly locked up with NO activity at all - all of the InnoDB thread slots have tasks which seem stuck, and hundreds of other threads are waiting. This goes on indefinitely, like 1-2 hours at 100% idle CPU, no queries executing, etc., until we discovered that FLUSH TABLES will fix it.
http://forums.mysql.com/read.php?22,273419,274069#msg-274069
Reason: You may need to increase the value of the table_cache variable. You can check whether you need to increase the table cache by checking the Opened_tables status variable. If its value is very large or increases rapidly, even when you have not issued many FLUSH TABLES statements, you should increase the table cache size.
Solution: increase table_cache.
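The check described above can be run from the mysql client; the 1024 below is purely an illustrative new value, not a recommendation (and note that from 5.1 the preferred variable name is table_open_cache):

mysql> show global status like 'Opened_tables';
mysql> show global variables like 'table_cache';
mysql> set global table_cache = 1024;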
database backups, replication, installation, performance tuning, High Availability
Tuesday, 18 August 2009
InnoDB freeze/lock-up
Wednesday, 1 July 2009
Manually DB Creation with 11G
1) Set your ORACLE_SID
shell>export ORACLE_SID=test
2) Create pfile
shell> vi $ORACLE_HOME/dbs/inittest.ora
control_files = (/u01/app/oracle/product/11.1.0/db_1/dbs/control_test_1.ctl,/u01/app/oracle/product/11.1.0/db_1/dbs/control_test_2.ctl,/u01/app/oracle/product/11.1.0/db_1/dbs/control_test_3.ctl)
undo_management = AUTO
undo_tablespace = ts_undo
db_name = test
db_block_size = 8192
sga_max_size = 524288000 # 500M
sga_target = 524288000 #500M
3) Create a password file
shell> $ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwdtest.ora password=oracle entries=5
4) Start the instance
sqlplus / as sysdba
startup nomount
5) Create the database
create database test
user sys identified by gd123
user system identified by gd123
controlfile reuse
maxdatafiles 22
maxinstances 2
character set we8iso8859p1
national character set utf8
--set default bigfile tablespace
-- database logging clauses
logfile
group 1 ('/u01/app/oracle/oradata/test/redo_test_1a.log', '/u01/app/oracle/oradata/test/redo_test_2a.log') size 4M,
group 2 ('/u01/app/oracle/oradata/test/redo_test_1b.log', '/u01/app/oracle/oradata/test/redo_test_2b.log') size 4M
--maxlogfiles 3
--maxlogmembers 7
maxloghistory 1000
archivelog
force logging
-- Tablespace Clauses
extent management local
datafile '/u01/app/oracle/oradata/test/system.dbf' size 300M autoextend ON
sysaux datafile '/u01/app/oracle/oradata/test/sysaux.dbf' size 100M autoextend on
default tablespace ts_test_data datafile '/u01/app/oracle/oradata/test/data.dbf' size 1M autoextend on next 128K maxsize 2M extent management local
default temporary tablespace ts_test_temp tempfile '/u01/app/oracle/oradata/test/temp.dbf' size 2M extent management local
undo tablespace ts_undo datafile '/u01/app/oracle/oradata/test/undo.dbf' size 20M
--
set time_zone = '-01:00';
6) Run catalog and catproc
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
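Once catproc.sql finishes, a quick sanity check that the new database is actually open can be done from the same SQL*Plus session:

SQL> select name, open_mode from v$database;
SQL> select instance_name, status from v$instance;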
Monday, 8 June 2009
MySQL Cluster - Windows/Linux Platform (Mixed setup)
In this article I will install data nodes on mixed platforms, i.e. one data node on Linux (CentOS 5) and the second data node on Windows XP 2003.
For this setup I have used three Servers
1. 192.168.35.22 ( Solaris 10, for running Management Node)
2. 192.168.35.78 ( Data Node, Centos 5)
3. 192.168.35.74 ( Data Node, Windows XP 2003)
Steps to install Data Node onto Windows XP ( 192.168.35.74)
1) Get the latest mysql cluster binaries for windows.
http://dev.mysql.com/downloads/cluster/7.0.html#win32
2) Run the Windows MSI installer and follow the steps, but don't configure/start the mysql instance.
3) The MSI installer will by default install the mysql binaries into the following path: C:\Program Files\MySQL\MySQL Server 7.0
4) Manually create the following path (directory structure on the C: drive):
/var/lib/mysql-cluster
5) Edit my.ini (C:\Program Files\MySQL\MySQL Server 7.0\my.ini) and paste in the following contents:
[mysqld]
ndbcluster
ndb-connectstring=192.168.35.22
[mysql_cluster]
ndb-connectstring=192.168.35.22
Steps to set up a Data Node on CentOS 5 (192.168.35.78)
1) Download the RPMs for CentOS 5 (I have used the Red Hat Enterprise Linux 5 RPMs):
http://dev.mysql.com/downloads/cluster/7.0.html#Red_Hat_Enterprise_Linux_5_RPM_(x86)
2) You should download the following RPMs:
- MySQL-Cluster-gpl-client-7.0.5-0.rhel5.i386.rpm
- MySQL-Cluster-gpl-devel-7.0.5-0.rhel5.i386.rpm
- MySQL-Cluster-gpl-management-7.0.5-0.rhel5.i386.rpm
- MySQL-Cluster-gpl-server-7.0.5-0.rhel5.i386.rpm
- MySQL-Cluster-gpl-shared-7.0.5-0.rhel5.i386.rpm
- MySQL-Cluster-gpl-storage-7.0.5-0.rhel5.i386.rpm
- MySQL-Cluster-gpl-tools-7.0.5-0.rhel5.i386.rpm
3) RPM installation:
rpm -Uvh MySQL-Cluster-gpl-client-7.0.5-0.rhel5.i386.rpm MySQL-Cluster-gpl-devel-7.0.5-0.rhel5.i386.rpm MySQL-Cluster-gpl-management-7.0.5-0.rhel5.i386.rpm MySQL-Cluster-gpl-server-7.0.5-0.rhel5.i386.rpm MySQL-Cluster-gpl-shared-7.0.5-0.rhel5.i386.rpm MySQL-Cluster-gpl-storage-7.0.5-0.rhel5.i386.rpm MySQL-Cluster-gpl-tools-7.0.5-0.rhel5.i386.rpm
4) Open the my.cnf file with your favorite editor (e.g. vi /etc/my.cnf) and paste in the following lines:
[mysqld]
ndbcluster
ndb-connectstring=192.168.35.22
[mysql_cluster]
ndb-connectstring=192.168.35.22
Steps to install the Management Node
1) Download the binaries for Solaris:
http://dev.mysql.com/downloads/cluster/7.0.html#Solaris
2) Uncompress the file, i.e. gunzip mysql-cluster-gpl-7.0.5-solaris10-i386.pkg.gz
3) Install the package, i.e. pkgadd -d mysql-cluster-gpl-7.0.5-solaris10-i386.pkg
4) Create the following directory structure:
/var/lib/mysql-cluster
5) Set up the configure.ini file for the management server and paste the following contents into it, e.g.:
vi /etc/configure.ini
[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management Server
[NDB_MGMD]
HostName=192.168.35.22 # the IP of the Management Server
# Storage Engines
[NDBD]
HostName=192.168.35.78 # the IP of the CentOS 5 machine running a Data Node
DataDir= /var/lib/mysql-cluster
[NDBD]
HostName=192.168.35.74 # the IP of the Windows XP machine running a Data Node
DataDir=/var/lib/mysql-cluster
# 2 MySQL Clients
# I personally leave this blank to allow rapid changes of the mysql clients;
# you can enter the hostnames of the above two servers here. I suggest you don't.
[MYSQLD]
[MYSQLD]
Steps to start mysql cluster (192.168.35.22)
Start Management Node
1. First, start the management server:
ndb_mgmd -f /etc/configure.ini
Start Data Nodes
2. Go to the CentOS 5 machine (192.168.35.78) and start the data node:
ndbd
3. Go to the Windows XP machine (192.168.35.74) and start the data node:
Locate the ndbd.exe file (C:\Program Files\MySQL\MySQL Server 7.0\bin) and double-click ndbd.exe.
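As an alternative to double-clicking, ndbd.exe can also be started from a Command Prompt, which keeps its console output visible:

C:\> cd "C:\Program Files\MySQL\MySQL Server 7.0\bin"
C:\> ndbd.exe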
Check it's working
You can now return to the management server (192.168.35.22) and enter the management console:
/opt/mysql/mysql-cluster/bin/ndb_mgm
Enter the command SHOW to see what is going on. A sample output looks like this:
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.35.78 (mysql-5.1.32 ndb-7.0.5, Nodegroup: 0, Master)
id=3 @192.168.35.74 (mysql-5.1.32 ndb-7.0.5, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.35.22 (mysql-5.1.32 ndb-7.0.5)
[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from any host)
id=5 (not connected, accepting connect from any host)
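The two mysqld(API) slots show "not connected" until a MySQL server configured with ndbcluster (as in the my.cnf/my.ini sections above) is started and joins the cluster; for example, on the CentOS data node:

shell> /etc/init.d/mysql start
shell> ndb_mgm -e show

One of the API slots (id=4 or id=5) should then report as connected.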
Tuesday, 7 April 2009
Centos 5.3 vs Sun Solaris 10- innodb performance
This article compares the performance characteristics of MySQL 5.1.33 running on Solaris 10 versus CentOS 5.3.
The test results show that InnoDB performance on CentOS 5.3 is much better than on Sun Solaris 10.
Lab Setup-
The list of hardware used is as follows:
vendor_id : AuthenticAMD
cpu family : 15
model : 107
model name : AMD Athlon(tm) 64 X2 Dual Core Processor 4000+
stepping : 1
cpu MHz : 2100.000
cache size : 512 KB
RAM : 1.5G
The list of software used is as follows:
* CentOS 5.3
* Solaris 10 (SunOS 5.10)
* Mysql 5.1.33
* vmstat
The same my.cnf was used on both CentOS and Solaris:
######################################
[mysqld]
# InnoDB tables
innodb_file_per_table
innodb_buffer_pool_size = 512M
innodb_additional_mem_pool_size = 20M
innodb_log_file_size =128M
######################################
The server was first loaded with the CentOS operating system and MySQL 5.1.33 was installed (RPM installation). A total of 23 million rows was inserted into InnoDB tables using the mysqlimport command.
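For reference, mysqlimport loads one data file per table, with the table name taken from the file name; a sketch of the kind of invocation used (the database name, file name and delimiter here are illustrative, not the exact ones from this test):

shell> mysqlimport --local --fields-terminated-by='\t' testdb /tmp/t1.txt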
vmstat stats -
vmstat shows that the OS was writing at more than 14MB/s, but no I/O wait is reported; the CPU spent 49% of its time running non-kernel code.
Test results on Sun Solaris 10 to follow...
Wednesday, 1 April 2009
FEDERATED Storage Engine
I have observed that DML statements run faster on FEDERATED tables than SELECT statements do.
FEDERATED isn't very clever when it comes to SELECT: it does not, for example, push a LIMIT clause down to the remote server, but instead does the LIMIT processing locally. There are other limitations in the protocol FEDERATED uses that make it sometimes underperform for SELECT queries.
This is not the case with DML operations, as they are handled 100% on the remote server.
Strangely, not all types of SELECT queries take long - only those where I use the COUNT() function.
FEDERATED actually moves all the data to the client and then performs the COUNT() locally in this case, which would explain the lack of speed.
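To make the COUNT() observation concrete, here is a minimal FEDERATED table definition (the connection string, user, schema and table names are hypothetical):

create table t_remote (
  id int not null,
  val varchar(50),
  primary key (id)
) engine=FEDERATED
connection='mysql://fed_user:fed_pass@192.168.35.78:3306/testdb/t_remote';
-- This COUNT(*) pulls every row across the wire and counts locally:
select count(*) from t_remote;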
Mysqldumpslow - How to
Note: If you haven't enabled the slow log and are using 5.0.x, add the following lines to my.cnf under [mysqld] and restart the mysql server.
####### 5.0.x
log-slow-queries
long_query_time=5 # capture slowest slow queries first
######
# If you have installed 5.1.x then follow these steps instead:
mysql> set global slow_query_log=1;
Query OK, 0 rows affected (0.02 sec)
mysql> set global long_query_time=5;
Query OK, 0 rows affected (0.00 sec)
You need to have Perl installed on your system in order to run mysqldumpslow. If Perl is installed in a location other than /usr/local/bin/perl, you can either edit the mysqldumpslow script to point to the right location of the perl binary, or create a symbolic link /usr/local/bin/perl that points to the real location of the perl binary.
e.g.
mysqldumpslow -s c -t 10 /path-to-slowlog/slowlog.log
If the log file is big, mysqldumpslow might put some load on your CPU, so I would suggest copying the log file to some other box and running mysqldumpslow there, just to avoid CPU load on your server.
In order to parse only the last 200 lines of the file, you will have to do something like this:
tail --lines=200 /data/data/eus3300005-slow.log | mysqldumpslow -t20 -
(tail --lines=200 will get only the last 200 lines, and the final "-" parameter tells mysqldumpslow to read the log from stdin)
For Solaris 10, please use:
tail -200 /data/data/eus3300005-slow.log | mysqldumpslow -t20 -
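The tail trick is easy to sanity-check without a real slow log; here /tmp/fake-slow.log is just a throwaway file generated for the demo:

```shell
# Generate a 500-line stand-in for a slow log, then take only the last 200 lines.
seq 1 500 > /tmp/fake-slow.log
tail --lines=200 /tmp/fake-slow.log | wc -l   # prints 200
```

On Solaris, replace tail --lines=200 with tail -200, as noted above.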