Friday, 22 May 2015

MySQL cannot use indexes - common mistake

I recently helped a customer resolve a performance issue. The problem was caused by a very simple UPDATE query that modifies only one record:

EXPLAIN UPDATE PROD_CHECK_MODULES SET DATE_CHECK = NOW() WHERE COUGAR_NAME='cougar-p01' AND ID_MODULE=4 AND NUM_MODULE=03;
+------+-------------+--------------------+-------+---------------+-----------+---------+------+------+-------------+
| id   | select_type | table              | type  | possible_keys | key       | key_len | ref  | rows | Extra       |
+------+-------------+--------------------+-------+---------------+-----------+---------+------+------+-------------+
|    1 | SIMPLE      | PROD_CHECK_MODULES | range | ID_MODULE     | ID_MODULE | 4       | NULL |    6 | Using where |
+------+-------------+--------------------+-------+---------------+-----------+---------+------+------+-------------+
1 row in set (0.00 sec)


 SHOW CREATE TABLE PROD_CHECK_MODULES\G
*************************** 1. row ***************************
....
  `COUGAR_NAME` varchar(64) NOT NULL,
  `ID_MODULE` int(11) NOT NULL,
  `NUM_MODULE` char(3) NOT NULL
...
UNIQUE KEY `COUGAR_NAME` (`COUGAR_NAME`,`ID_MODULE`,`NUM_MODULE`),
KEY `ID_MODULE` (`ID_MODULE`)
.....
Ideally it should be using the unique key to examine/update 1 row. The table in question has fewer than 200 rows, so "6 rows vs 1 row" does not look too bad on its own; however, the query blocked a large number of similar UPDATE queries behind a long-running transaction. The first step was to identify and kill the long-running transaction that was blocking the other UPDATE queries; the second was to tune the UPDATE query itself.
The issue here is that the value 03 cannot be compared to the column NUM_MODULE without a data type conversion, i.e. we are comparing a numeric value with a string column, which is why MySQL could not use the most suitable available index. We had two options to fix this:

a) Modify the query to quote the value, e.g.
NUM_MODULE="03"
b) Modify the table structure to use an integer type, e.g.
NUM_MODULE SMALLINT NOT NULL
The easy fix was to modify the table structure, because all NUM_MODULE values are numeric and the table is quite small.
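Putting both options into SQL (a sketch; column and table names are from the query above, and the ALTER assumes no application code depends on the CHAR(3) type):

```sql
-- Fix (a): quote the value so no implicit conversion is needed
UPDATE PROD_CHECK_MODULES SET DATE_CHECK = NOW()
 WHERE COUGAR_NAME='cougar-p01' AND ID_MODULE=4 AND NUM_MODULE='03';

-- Fix (b): change the column to a numeric type (the option we chose)
ALTER TABLE PROD_CHECK_MODULES MODIFY NUM_MODULE SMALLINT NOT NULL;
```

After either fix, EXPLAIN should show the unique key COUGAR_NAME being used with a single-row estimate.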

Monday, 11 May 2015

Tracking long running processes in MySQL

Tracking long running processes in MySQL is not difficult. A process can be:
  • SQL query
  • Transaction 
There are different ways of tracking both types of processes. First we will look at some of the available methods to identify long-running queries:
  • slow query log
  • pt-kill
  • performance schema
  • packet inspection

Slow query log - This is the most common method of identifying slow queries. First, check whether it is already enabled:

mysql> SHOW GLOBAL VARIABLES WHERE VARIABLE_NAME LIKE 'slow_query%' OR VARIABLE_NAME LIKE 'long_query_time';
+---------------------+----------------------------------+
| Variable_name       | Value                            |
+---------------------+----------------------------------+
| long_query_time     | 10.000000                        |
| slow_query_log      | OFF                              |
| slow_query_log_file | /data/mysql/log/mariadb-slow.log |
+---------------------+----------------------------------+
3 rows in set (0.01 sec)

In this example, it is not. To enable the slow query log, issue the following commands:
mysql> SET GLOBAL slow_query_log=ON;
Query OK, 0 rows affected (0.00 sec)
mysql> set global long_query_time=1.0;
Query OK, 0 rows affected (0.00 sec)
I took the opportunity to reduce the long_query_time value from 10 seconds to just 1 second as well; the idea is to capture all queries that take at least 1 second. To make these changes persistent, modify the MySQL configuration file (e.g. /etc/my.cnf or my.ini):
[mysqld]
..
slow_query_log  = ON
long_query_time = 1
Next you will want to summarize the slow query log; there are two well-known tools to choose from:
mysqldumpslow - the classic tool, shipped with MySQL since the early days; the online docs explain it very well, click here
pt-query-digest - widely adopted by the community; click here to learn more about it.
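For example, assuming the log path shown in the SHOW VARIABLES output above, a typical summarization run would look like this (a sketch; both tools must be installed on the host):

```shell
# Top 10 queries sorted by total query time
mysqldumpslow -s t -t 10 /data/mysql/log/mariadb-slow.log

# Full digest report with pt-query-digest
pt-query-digest /data/mysql/log/mariadb-slow.log > slow-report.txt
```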

If you are still using an older release of MySQL (e.g. 5.0 or earlier), it is not possible to enable slow query logging without restarting the mysqld process. If you cannot restart the MySQL process, you can still track slow queries using the pt-kill utility:
$ pt-kill --host=localhost -udba --ask-pass --busy-time 1 --print --daemonize --log /var/log/slow-query.log # --busy-time decides queries that have been running for longer than this time
Remember NOT to specify the --kill option, as it would start killing your queries.

Performance Schema - Introduced in version 5.5 and enabled by default in 5.6+ releases. It provides a way to inspect internal server events (such as IO activity, locks, query profiling, etc.) and can also be used to track timing of long-running processes. Below are a few example queries that track the slowest statements, statements that use temporary tables, and/or statements that perform full table scans:
###
## A high-level overview of the statements (query analysis), sorted by those with the highest total latency
##
 
SELECT DIGEST, DIGEST_TEXT AS query,
       first_seen,
       last_seen,
       COUNT_STAR AS exec_count,
       ROUND(SUM_TIMER_WAIT / 1000000000000, 6) AS total_latency_sec -- SUM_TIMER_WAIT is in picoseconds
  FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC LIMIT 5\G
 
###
# List all normalized statements that use temporary tables, ordered first by the number of on-disk temporary tables (descending), then by the number of in-memory temporary tables.
###
 
SELECT IF(LENGTH(DIGEST_TEXT) > 64, CONCAT(LEFT(DIGEST_TEXT, 30), ' ... ', RIGHT(DIGEST_TEXT, 30)), DIGEST_TEXT) AS query,
       COUNT_STAR AS exec_count,
       SUM_CREATED_TMP_TABLES AS memory_tmp_tables,
       SUM_CREATED_TMP_DISK_TABLES AS disk_tmp_tables,
       ROUND(SUM_CREATED_TMP_TABLES / COUNT_STAR) AS avg_tmp_tables_per_query,
       ROUND((SUM_CREATED_TMP_DISK_TABLES / SUM_CREATED_TMP_TABLES) * 100) AS tmp_tables_to_disk_pct,
       DIGEST AS digest
  FROM performance_schema.events_statements_summary_by_digest
 WHERE SUM_CREATED_TMP_TABLES > 0
ORDER BY SUM_CREATED_TMP_DISK_TABLES DESC, SUM_CREATED_TMP_TABLES DESC LIMIT 5;
 
###
# List all normalized statements that have done a full table scan, ordered by the percentage of executions that did a full scan, then by execution count.
###
 
SELECT IF(LENGTH(DIGEST_TEXT) > 64, CONCAT(LEFT(DIGEST_TEXT, 30), ' ... ', RIGHT(DIGEST_TEXT, 30)), DIGEST_TEXT) AS query,
       COUNT_STAR AS exec_count,
       SUM_NO_INDEX_USED AS no_index_used_count,
       SUM_NO_GOOD_INDEX_USED AS no_good_index_used_count,
       ROUND((SUM_NO_INDEX_USED / COUNT_STAR) * 100) no_index_used_pct,
       DIGEST AS digest
  FROM performance_schema.events_statements_summary_by_digest
 WHERE SUM_NO_INDEX_USED > 0
    OR SUM_NO_GOOD_INDEX_USED > 0
ORDER BY no_index_used_pct DESC, exec_count DESC LIMIT 5;
Packet inspection - This can be achieved using the tcpdump utility; it is not covered in this article due to its known limitations:
  • you can only observe queries sent over the network;
  • you don't see queries sent through Unix sockets or via replication;
  • you also can't see queries run from stored procedures;
  • security concern: this method requires root access.
If you are keen to explore this method of tracking slow queries, click here and here.

In the next article, I will blog about how to identify long-running transactions...

Monday, 2 February 2015

MySQL-5.7.5-m15 - out of the box security improvements

In this article I explore the out-of-the-box security improvements available in the MySQL 5.7.5 release (not production-ready yet). I am using CentOS 6.5 with RPM packages.

Installing using RPMs

[root@localhost ~]# rpm -ivh mysql-community-server-5.7.5-0.6.m15.el6.i686.rpm mysql-community-client-5.7.5-0.6.m15.el6.i686.rpm mysql-community-libs-5.7.5-0.6.m15.el6.i686.rpm mysql-community-common-5.7.5-0.6.m15.el6.i686.rpm
Preparing...                ########################################### [100%]
   1:mysql-community-common ########################################### [ 25%]
   2:mysql-community-libs   ########################################### [ 50%]
   3:mysql-community-client ########################################### [ 75%]
   4:mysql-community-server ########################################### [100%]

Starting MySQL Server

[root@localhost ~]# /etc/init.d/mysqld start
Initializing MySQL database:                               [  OK  ]
Starting mysqld:                                           [  OK  ]

Securing the MySQL server deployment.

Connecting to MySQL server using password in '/root/.mysql_secret'

VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?

Press y|Y for Yes, any other key for No:  y

There are three levels of password validation policy:

LOW    Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and dictionary                  file

Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG:  2
By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) :  y
Success.


Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) :  y
Success.

By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.


Remove test database and access to it? (Press y|Y for Yes, any other key for No) :  y
 - Dropping test database...
Success.

 - Removing privileges on test database...
Success.

Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) :  y
Success.

All done!

-------------------------------------------------------------------------------------------------------

A very long output, but easy to follow. There are many improvements:

* The password validation plugin is loaded by default and the password validation policy is set to 'STRONG'.
* The installation removes the test database and anonymous user accounts.
* The installation creates only one 'root' account and automatically generates a password for this account.
* The installation automatically removes permissions on the non-existent test% databases.

To log in to the MySQL server, I need to use the random password generated by the installation, which is stored in /root/.mysql_secret, and then select a new password using the SET PASSWORD command.
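The first login and password change would look roughly like this (the new password below is hypothetical; it must satisfy the 'STRONG' validation policy chosen earlier):

```sql
-- After: mysql -u root -p   (using the password from /root/.mysql_secret)
SET PASSWORD = PASSWORD('N3w_Str0ng!Pass');
```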


SQL MODE Changes


Strict SQL mode for transactional storage engines (STRICT_TRANS_TABLES) is now enabled by default. An error occurs for invalid or missing values in a data-change statement, and the statement is aborted and rolled back. For example, if you try to insert a value that exceeds the specified column width, i.e. VARCHAR(20):

mysql> show create table t\G
*************************** 1. row ***************************
       Table: t
Create Table: CREATE TABLE `t` (
  `a` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `b` varchar(20) DEFAULT NULL,
  PRIMARY KEY (`a`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8
1 row in set (0.08 sec)

mysql> insert into t values (111, repeat('a',21));

ERROR 1406 (22001): Data too long for column 'b' at row 1


Password Expiration Policy

Passwords now automatically expire after 360 days by default - a great improvement, as you will want the passwords on some accounts to definitely expire. At the same time, you can disable password expiration for selected accounts. You can read the MySQL manual to learn more about this feature here.
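For example (a sketch; the account name is hypothetical), the global policy can be inspected and expiration disabled per account like this:

```sql
-- Check the global policy; 360 days is the 5.7.5 default
SHOW VARIABLES LIKE 'default_password_lifetime';

-- Disable password expiration for a specific account
ALTER USER 'app'@'localhost' PASSWORD EXPIRE NEVER;
```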

MySQL OLD Passwords

The mysql_old_password authentication plugin is removed. Accounts that use this plugin are disabled at startup and the server writes an "unknown plugin" message to the error log. For instructions on upgrading accounts that use this plugin, see the MySQL reference manual.

Tuesday, 29 October 2013

Using MySQL with Multi-volume DRBD resource


There are many different ways of achieving high availability for MySQL. DRBD (Distributed Replicated Block Device) is one of the leading solutions for MySQL HA (High Availability); in this blog I will demonstrate how to install, configure and test MySQL with DRBD.

Quick overview of DRBD

DRBD synchronizes data at the block device level (typically a spinning or solid state disk) - transparently to the application, database and even the file system. DRBD requires the use of a journaling file system such as ext3 or ext4. For this solution it acts in active-standby mode - this means that at any point in time the directories being managed by DRBD are accessible for reads and writes on exactly one of the two hosts and inaccessible (even for reads) on the other. Any changes made on the active host are synchronously replicated to the standby host by DRBD.

Pre-Requisites 

Two servers, each with:

* Redhat/Centos 6.3
* Network connectivity
* MySQL 5.5 or later
* Unpartitioned space on the local disks to create DRBD partitions; in this example I will use LVM logical volumes as the DRBD backing devices.

For simplicity, servers are named as host1 and host2.
host1: 192.168.0.6
host2: 192.168.0.7

The following configuration files are created:

/etc/hosts (host1)



 /etc/hosts (host2)


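The original screenshots of the hosts files are not reproduced here; based on the addresses above, both files would contain entries along these lines:

```
192.168.0.6   host1
192.168.0.7   host2
```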

Ensure DRBD user-land tools are installed

First, use yum to check that they aren't already there (if "Repo" is set to "installed" then the package is already installed):


Check that kmod-drbd84 is installed as well. Please ensure that both packages have the same version number:



These packages don't come in the default yum repository, so you may have to download them:
wget http://elrepo.org/linux/elrepo/el6/x86_64/RPMS/kmod-drbd84-8.4.1-1.el6.elrepo.x86_64.rpm
wget http://elrepo.org/linux/elrepo/el6/x86_64/RPMS/drbd84-utils-8.4.1-1.el6.elrepo.x86_64.rpm

And install them if not:
yum install drbd84-utils  kmod-drbd84 -y







The same steps should be performed on host2


Configure DRBD/FileSystem

Confirm that you have a volume group available from which to carve the logical volumes:


 
In this case I have volume group 'vg0' with 3 GB of free space. I will create two logical volumes on both hosts:

[root@host1 ~] lvcreate --name data --size 1G vg0
[root@host1 ~] lvcreate --name logs --size 1G vg0

[root@host2 ~] lvcreate --name data --size 1G vg0
[root@host2 ~] lvcreate --name logs --size 1G vg0


Note: If you don't have a volume group available but do have an empty partition, click here to learn more about how to create volume groups/LVs.

The new logical volumes will be used as a resource, managed (and synchronized between hosts) by DRBD; for this reason a new DRBD resource file must be created in the /etc/drbd.d/ directory; the contents should look like this:

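The original screenshot of the resource file is missing; a minimal sketch, assuming DRBD 8.4 multi-volume syntax, the two logical volumes created above, and a single replication port (the post mentions both 7788 and 7789, which would apply if each volume used its own connection):

```
resource r0 {
  volume 0 {
    device    /dev/drbd0;
    disk      /dev/vg0/data;
    meta-disk internal;
  }
  volume 1 {
    device    /dev/drbd1;
    disk      /dev/vg0/logs;
    meta-disk internal;
  }
  on host1 {
    address   192.168.0.6:7789;
  }
  on host2 {
    address   192.168.0.7:7789;
  }
}
```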

The same configuration file must be copied over to the same location on the second host:

scp /etc/drbd.d/r0.res host2:/etc/drbd.d/

At this point, I disabled the firewall on host1 and host2 to allow communication on the required ports (7788 and 7789).

Next, metadata should be created for the DRBD resource (r0); on each host perform the following:

[root@host1 ~]# drbdadm create-md r0

[root@host2 ~]# drbdadm create-md r0

Now we should start the DRBD daemon on both hosts (run the command at about the same time on each host, because the DRBD peers need to communicate with each other):

[root@host1 ~]# /etc/init.d/drbd start

[root@host2 ~]# /etc/init.d/drbd start

In order to create the file systems (and go on to store useful data on them), one of the hosts must be made primary for the 'r0' resource:


[root@host1 ~]# drbdadm -- --overwrite-data-of-peer primary all

Now that the devices are available as drbd0/drbd1, it is possible to create a file system on each:
 

[root@host1 ~]#  mkfs -t ext4 /dev/drbd0
[root@host1 ~]#  mkfs -t ext4 /dev/drbd1

Install and configure MySQL
MySQL binaries should be installed on both of the servers.

[root@host1 ~]#  yum install mysql-server mysql -y
[root@host2 ~]#  yum install mysql-server mysql -y

Create the following directories on both hosts:

mkdir /var/lib/mysql_data_drbd
mkdir /var/lib/mysql_logs_drbd

chown -R mysql:mysql /var/lib/mysql_data_drbd
chown -R mysql:mysql /var/lib/mysql_logs_drbd

At this stage we should mount DRBD devices (on primary host1), so that we can initialize MySQL data directory:

[root@host1 ~]# mount /dev/drbd0 /var/lib/mysql_data_drbd
[root@host1 ~]# mount /dev/drbd1 /var/lib/mysql_logs_drbd

[root@host1 ~]# mkdir /var/lib/mysql_data_drbd/data
[root@host1 ~]# mkdir /var/lib/mysql_logs_drbd/logs

Ensure that host1 is currently primary and DRBD backing devices are up.



Edit the /var/lib/mysql_data_drbd/my.cnf file and set datadir=/var/lib/mysql_data_drbd/data and log_bin=/var/lib/mysql_logs_drbd/logs in the [mysqld]
section. Also ensure that the socket is configured as /var/lib/mysql/mysql.sock and the pid file as
/var/lib/mysql/mysql.pid.

Here is the MySQL configuration file on host1:



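The screenshot of the options file is not reproduced here; a minimal sketch consistent with the paths described above (the mysql-bin basename for the binary log is an assumption):

```
[mysqld]
datadir  = /var/lib/mysql_data_drbd/data
log_bin  = /var/lib/mysql_logs_drbd/logs/mysql-bin
socket   = /var/lib/mysql/mysql.sock
pid-file = /var/lib/mysql/mysql.pid
```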


Create a symlink for the MySQL options file on both hosts:

[root@host1 ~]#  rm -f /etc/my.cnf
[root@host1 ~]#  cd /etc
[root@host1 ~]#  ln -s /var/lib/mysql_data_drbd/my.cnf my.cnf

[root@host2 ~]#  rm -f /etc/my.cnf
[root@host2 ~]#  cd /etc
[root@host2 ~]#  ln -s /var/lib/mysql_data_drbd/my.cnf my.cnf

Now install the default MySQL database files:

[root@host1 ~]# mysql_install_db --no-defaults --datadir=/var/lib/mysql_data_drbd/data --user=mysql

Start the MySQL server
Let's start the MySQL server and insert some test data:

[root@host1 ~]# mysqld --defaults-file=/etc/my.cnf &
[root@host1 ~]# mysql -e "CREATE DATABASE IF NOT EXISTS mysqlslap; USE mysqlslap; CREATE TABLE IF NOT EXISTS test (a INT, b VARCHAR(200)); INSERT INTO test VALUES (1, 'mysql drbd')"

Manual failover

It is possible to migrate the MySQL/DRBD resource to the alternative host; steps to be performed:
On host1 (currently primary):

# Stop mysql instance
[root@host1 ~]#  mysqladmin -uroot -p shutdown
# Unmount DRBD filesystems
[root@host1 ~]#  umount /var/lib/mysql_data_drbd; umount /var/lib/mysql_logs_drbd
# Switch DRBD to SECONDARY mode
[root@host1 ~]#  drbdadm secondary r0

On host2:
# Make this node PRIMARY
[root@host2 ~]#  drbdadm primary r0
# Mount DRBD filesystems
[root@host2 ~]#  mount /dev/drbd0 /var/lib/mysql_data_drbd
[root@host2 ~]#  mount /dev/drbd1 /var/lib/mysql_logs_drbd
# Start MySQL server
[root@host2 ~]# mysqld --defaults-file=/etc/my.cnf &
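As a quick sanity check after the failover, the test row inserted on host1 should now be readable on host2:

```shell
[root@host2 ~]# mysql -e "SELECT * FROM mysqlslap.test"
```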


Monday, 10 June 2013

Restrictions on Replication with GTIDs

I came across some issues when converting regular MySQL replication to replication with GTIDs.
Keywords used in this document:

MySQL replication - Replication enables data from one MySQL database server (the master) to be replicated to one or more MySQL database servers (the slaves). Replication is asynchronous by default.

GTID - A global transaction identifier (GTID) is a unique identifier created and associated with each transaction when it is committed on the server of origin (master). This identifier is unique not only to the server on which it originated, but across all servers in a given replication setup. There is a 1-to-1 mapping between transactions and GTIDs. For more information about GTIDs, click here.

Assumptions:
MySQL replication with GTIDs is up and running using  --enforce-gtid-consistency

Restrictions on Replication with GTIDs is documented here

1. Temporary tables - CREATE TEMPORARY TABLE and DROP TEMPORARY TABLE are not supported inside a transaction, e.g.

START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)

CREATE TEMPORARY TABLE t2 like t;
ERROR 1787 (HY000): When ENFORCE_GTID_CONSISTENCY = 1, the statements CREATE TEMPORARY TABLE and DROP TEMPORARY TABLE can be executed in a non-transactional context only, and require that AUTOCOMMIT = 1.

2. Updates involving non-transactional storage engines - It is not possible to mix non-transactional tables (such as MyISAM) with InnoDB tables within the same transaction.
The same issue can arise:

  • When the master and slave databases use different storage engines for their respective versions of the same table, e.g. table 'A' uses InnoDB on the master but MyISAM on the replication slave.
  • When the MySQL configuration on the master and slave hosts uses different binary log formats, e.g. the master is configured to use 'ROW' format and the slave 'STATEMENT'.

3. CREATE TABLE ... SELECT - Such statements are not supported, e.g.
create table t2 select * from t;
ERROR 1786 (HY000): CREATE TABLE ... SELECT is forbidden when ENFORCE_GTID_CONSISTENCY = 1.
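The same statement can be split into two GTID-safe statements, e.g.:

```sql
CREATE TABLE t2 LIKE t;
INSERT INTO t2 SELECT * FROM t;
```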

Conclusion
As a workaround, CREATE/DROP TEMPORARY TABLE statements can be performed outside a transaction, and CREATE TABLE ... SELECT can be split into two statements, such as CREATE TABLE t2 LIKE t; INSERT INTO t2 SELECT * FROM t. Similarly, developers would be forced to convert non-transactional tables (e.g. MyISAM) to InnoDB. In short, it would be a major code change for an existing application.

Tuesday, 14 May 2013

MySQL slow query example of Index Merge Intersection

My client reported a sudden slow query performance issue: the same query had been running fine for a long time, completing in a fraction of a second, and was now taking 30 seconds on average. Moreover, the same query ran fast on the development server but slow in production.
The MySQL processlist showed many threads running the same slow query, and the backlog continued to grow...

Looking at the SQL, it is not complex:

SELECT COUNT(*) 
FROM tblA
WHERE tblA.fkA_ID = 38926722
  AND tblA.fkAType_ID = 1
  AND tblA.fkADetail_ID = 476
  AND tblA.`Date` = '2013-05-10 07:14:41'
  AND tblA.Time = '2013-05-10 07:14:41'

And the EXPLAIN output is:
           id: 1
  select_type: SIMPLE
        table: tblA
         type: index_merge
possible_keys: FK_A_ID,FK_fkAType_ID,FK_fkADetail_ID,IX_Date
          key: FK_A_ID,FK_fkAType_ID
      key_len: 4,1
          ref: NULL
         rows: 1
        Extra: Using intersect(FK_A_ID,FK_fkAType_ID); Using where
1 row in set (0.00 sec)

At first glance, it does not look too bad: MySQL is using two different indexes to find 1 out of 66M rows to return the count. It is worth mentioning that column fkAType_ID is not very selective, i.e. it holds the value 1 or 2, and sadly 90% of the rows hold value 1. This means the query is scanning millions of index entries before doing the index merge with the fkA_ID index.

OK, we now know what's wrong with the query; let's see what options we have to optimize it:

A) Purge the table - it hadn't been purged for the past several months; this couldn't be done at the time.
B) Pass an index hint to the query optimizer to favor the index on the fkA_ID column:
SELECT COUNT(*) 
FROM tblA USE INDEX (FK_A_ID)
WHERE tblA.fkA_ID = 38926722
  AND tblA.fkAType_ID = 1
  AND tblA.fkADetail_ID = 476
  AND tblA.`Date` = '2013-05-10 07:14:41'
  AND tblA.Time = '2013-05-10 07:14:41'
Query OK, 1 row affected (0.20 sec)

The query completed in a fraction of a second, compared to 30 seconds! Let's look at the EXPLAIN output:
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tblA
         type: ref
possible_keys: FK_A_ID
          key: FK_A_ID
      key_len: 4
          ref: const
         rows: 179
        Extra: Using where
1 row in set (0.00 sec)

Once the backlog was consumed, in the next available maintenance window we purged the historic data, removed the index hints and added the following index to the table:

ALTER TABLE tblA 
DROP INDEX `FK_A_ID`, 
ADD INDEX `FK_A_ID_DATE` (`fkA_ID`,`Date`)
Note: the index on column fkAType_ID is needed due to a foreign key constraint. The table structure:
CREATE TABLE `tblA` (
  `pkID` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `fkApplication_ID` smallint(5) unsigned NOT NULL,
  `fkA_ID` int(10) unsigned NOT NULL,
  `fkAType_ID` tinyint(3) unsigned NOT NULL,
  `Date` date NOT NULL,
...
  KEY `FK_Application` (`fkApplication_ID`),
  KEY `FK_fkAType_ID` (`fkAType_ID`),
  KEY `IX_Date` (`Date`),
  KEY `FK_A_ID` (`fkA_ID`),
...
CONSTRAINT `FK_AType` FOREIGN KEY (`fkAType_ID`) REFERENCES `tblAType` (`pkAType_ID`),
...
) ENGINE=InnoDB AUTO_INCREMENT=505970021 DEFAULT CHARSET=utf8

Friday, 12 April 2013

Mongo db stress test using different writeConcern

The aim is to determine how MongoDB performs using different writeConcern settings, which control the acknowledgment/durability of write operations, with options such as:
  • ACKNOWLEDGED [default]
  • UNACKNOWLEDGED
  • JOURNALED
  • JOURNALED_SAFE
  • SAFE
  • FSYNCED
  • FSYNC_SAFE
WriteConcern options are documented here: http://docs.mongodb.org/manual/core/write-concern/

Test results: a total of 200k documents were inserted into the 'user' collection using 4, 8 and 16 parallel threads.
 

The UNACKNOWLEDGED option is the fastest of all but not safe; on the other hand, FSYNC_SAFE is the safest option but the slowest of all.