Fremont beta2 (2011.11.29) has been released

posted Nov 15, 2011, 2:25 AM by

The Fremont beta2, version 2011.11.29, is out and ready to be tested.

In this release:
* continuing refactoring, restructuring, and code quality improvements
* many more documentation improvements
* documentation available at
* fixes to libdrizzle .pc support
* fixes to build scripts
* additional bugs fixed

The Drizzle download file can be found here

Memcached new features...

posted Nov 15, 2011, 1:57 AM by

There is a new set of changes to memcached. This was originally posted by dormando here.

Memcached 1.4.10 Release Notes

Date: 2011-11-09


Download Link:


This release is focused on thread scalability and performance improvements. This release should be able to feed data back faster than any network card can support as of this writing.


  • Disable issue 140's test.
  • Push cache_lock deeper into item_alloc
  • Use item partitioned lock for as much as possible
  • Remove the depth search from item_alloc
  • Move hash calls outside of cache_lock
  • Use spinlocks for main cache lock
  • Remove uncommon branch from asciiprot hot path
  • Allow all tests to run as root

New Features


For more details, read the commit messages from git. Each change was carefully researched so as not to increase memory requirements and to be safe from deadlocks, and each change was individually tested via mc-crusher to verify the benefits.

Speed improvements were tested with between 3 and 6 worker threads (-t 3 to -t 6); more than -t 6 reduced speed.

In my tests, sets rose from 300k/s to around 930k/s. Key fetches per second (multigets) went from 1.6 million/s to around 3.7 million/s on a quad-core box. A machine with more cores was able to pull 6 million keys per second. Incr/decr performance increased similarly to set performance. Non-bulk tests were limited by the packet rate of localhost or the network card.

Multiple NUMA nodes reduce performance (but not by enough to really matter). If you want the absolute highest speed as of this release, you can run one instance per NUMA node (where n is your core count):

numactl --cpunodebind=0 memcached -m 4000 -t n
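The numactl invocation above can be sketched as a loop, one instance per node. This is a hypothetical example: the node count, ports, memory size (-m), and thread count (-t) below are illustrative assumptions, not values from the release notes.

```shell
#!/bin/sh
# Launch one memcached instance per NUMA node, each bound to that node's CPUs
# and listening on its own port. Here we only build and print the commands.
NUM_NODES=2   # e.g. taken from the output of `numactl --hardware`
node=0
CMDS=""
while [ "$node" -lt "$NUM_NODES" ]; do
    cmd="numactl --cpunodebind=$node memcached -d -m 4000 -t 4 -p $((11211 + node))"
    CMDS="$CMDS$cmd
"
    node=$((node + 1))
done
printf '%s' "$CMDS"   # review the commands, then run them
```

Clients would then need every per-node port in their server list.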

Older versions of memcached are plenty fast for just about all users. This changeset is to allow more flexibility in future feature additions, as well as improve memcached's overall latency on busy systems.

Keep an eye on your hitrate and performance numbers. Please let us know immediately if you experience any regression from these changes. We have tried to be as thorough as possible in testing, but you never know.


The following people contributed to this release since 1.4.9.

Note that this is based on who contributed changes, not how they were done. In many cases, a code snippet on the mailing list or a bug report ended up as a commit with your name on it.

Note that this is just a summary of how many changes each person made, which doesn't necessarily reflect how significant each change was. For details on what went into the release, either grab the git repo and look at the output of git log 1.4.9..1.4.10, or use a web view.
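For example, run from inside a clone of the memcached git repository, commands along these lines produce that per-contributor summary:

```shell
# Total number of commits between the two releases:
git log --oneline 1.4.9..1.4.10 | wc -l
# Commits per contributor, most active first:
git shortlog -s -n 1.4.9..1.4.10
```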

Drizzle Simple Replication Walkthrough

posted Nov 15, 2011, 1:19 AM by

Replication in Drizzle is very simple, and multi-source replication is supported. For a walkthrough of multi-master (multi-source) replication, see David Shrewsbury's excellent post here. Because his post on provisioning a new replication slave explains it so succinctly, I am quoting a lot of it here, but I have added some detail on the slave.cfg file for clarity for newbies like me, as well as some more detail on the options and their purpose.

A lot of this can also be found in the documentation but here I’m going to walk through the steps. Also see the slave docs here for any questions you may have.

For our purposes we will walk through setting up basic replication between a master and a slave server.

You will need to set up your slave.cfg file before you do anything else. It is usually located in the /usr/local directory, but it can live anywhere you like; mine is at /tmp/slave.cfg.

This is a typical setup.

master-host = "your ip address"
master-port = 4427
master-user = kent
master-pass = samplepassword
io-thread-sleep = 10
applier-thread-sleep = 10

Setting up the master is the next step. An important requirement is to start the master Drizzle database server with the --innodb.replication-log option, plus a few other options in most circumstances. More options can be found in the options documentation; these are the most common ones needed for a replication master:

The InnoDB replication log must be running:


PID must be set:


The address binding for Drizzle's default port (4427):


The address binding for systems replicating through MySQL's default port (3306):


Data Directory can be set other than default:


For more complex setups, the server id option may be appropriate to use:


To run Drizzle in the background, thereby keeping the database running if the user logs out:


So the start command looks like this on my server:

master> /usr/local/sbin/drizzled \
--innodb.replication-log \
--pid-file=/var/run/drizzled/ \
--drizzle-protocol.bind-address= \
--mysql-protocol.bind-address=
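
Filled out with concrete values, a full master start command might look like the following. This is a hypothetical sketch: the PID file name, bind address, and data directory are illustrative assumptions, not values from the original post.

```shell
/usr/local/sbin/drizzled \
  --innodb.replication-log \
  --pid-file=/var/run/drizzled/drizzled.pid \
  --drizzle-protocol.bind-address=192.0.2.10 \
  --mysql-protocol.bind-address=192.0.2.10 \
  --datadir=/usr/local/drizzle/data \
  --daemon
```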

Starting the slave is very similar to starting the master but there are a couple of steps before you are ready to start it up. The following is quoted from David’s blog post on simple replication.

1. Make a backup of the master databases.
2. Record the state of the master transaction log at the point the backup was made.
3. Restore the backup on the new slave machine.
4. Start the new slave and tell it to begin reading the transaction log from the point recorded in #2.

Steps #1 and #2 are covered by the drizzledump client program. If you use the --single-transaction option to drizzledump, it will place a comment near the beginning of the dump output with the InnoDB transaction log metadata. For example:
master> drizzledump --all-databases --single-transaction > master.backup
master> head -1 master.backup

The SYS_REPLICATION_LOG comment tells the slave where to start reading. It contains two pieces of information:

• COMMIT_ID: This value is the commit sequence number recorded for the most recently executed transaction stored in the transaction log. We can use this value to determine proper commit order within the log. The unique transaction ID cannot be used since that value is assigned when the transaction is started, not when it is committed.
• ID: This is the unique transaction identifier associated with the most recently executed transaction stored in the transaction log.

Now you need to start the server without the slave plugin, then import the backup from the master, then shutdown and restart the server with the slave plugin. This is straight out of the docs:

slave> sbin/drizzled --datadir=$PWD/var &
slave> drizzle < master.backup
slave> drizzle --shutdown

Now that the backup is imported, restart the slave with the replication slave plugin enabled and use a new option, --slave.max-commit-id, to force the slave to begin reading the master's transaction log at the proper location:

You need two options at a minimum: adding the slave plugin and defining the slave.cfg file. So the most basic start command is:

slave> /usr/local/sbin/drizzled \
--plugin-add=slave \
--slave.config-file=/tmp/slave.cfg

A more typical startup will need more options. My startup looks like this:

slave> /usr/local/sbin/drizzled \
--plugin-add=slave \
--datadir=$PWD/var \
--slave.config-file=/usr/local/etc/slave.cfg \
--pid-file=/var/run/drizzled/ \
--drizzle-protocol.bind-address= \
--mysql-protocol.bind-address= \
--daemon \
--slave.max-commit-id=33426

The slave.max-commit-id value comes from the dump file we made on the master; it tells the slave where to start reading in the master's transaction log.
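The value can be extracted from the dump header mechanically. This is a minimal sketch assuming the header comment carries a COMMIT_ID = <n> field as described above; the exact comment text below is illustrative, so check the first line of your own master.backup.

```shell
#!/bin/sh
# In practice the header line would come from: head -1 master.backup
header='-- SYS_REPLICATION_LOG: COMMIT_ID = 33426, ID = ...'
# Strip everything except the digits following "COMMIT_ID = ".
COMMIT_ID=$(printf '%s' "$header" | sed 's/.*COMMIT_ID = \([0-9]*\).*/\1/')
echo "$COMMIT_ID"
```

The resulting number is what gets passed as --slave.max-commit-id.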

If you need more information for your particular setup, the sys replication log and InnoDB replication log tables provide a lot of helpful detail.

Two tables in the DATA_DICTIONARY schema provide the different views into the transaction log: the SYS_REPLICATION_LOG table and the INNODB_REPLICATION_LOG table.

drizzle> SHOW CREATE TABLE data_dictionary.sys_replication_log\G
*************************** 1. row ***************************
) ENGINE=InnoDB COLLATE = binary

drizzle> SHOW CREATE TABLE data_dictionary.innodb_replication_log\G
*************************** 1. row ***************************
) ENGINE=FunctionEngine COLLATE = utf8_general_ci REPLICATE = FALSE

There you are: you should now be up and running with replication.

For more details you can always check the online documentation. And make sure you check out

Drizzle Fremont Beta Released

posted Nov 8, 2011, 2:35 PM by

Fremont has gone BETA! Please test away and let us know if anything is broken.

Summary of changes since the Elliott release:

- Multi-master replication (no conflict resolution)
- UUIDs for replication
- JSON interface available
- Percona Innodb patches merged
- Xtrabackup in-tree
- IPv6 data type available
- Query log plugin (syslog) is enabled / on by default
- Ability to publish transactions to zeromq
- Improvements to logging stats plugin
- Work on stored procedure interface
- Removal of drizzleadmin utility
- Removal of HailDB engine
- Revamped testing system with suites of randgen, sysbench, sql-bench, and crashme tests
- Continued code refactoring
- Various bug fixes

You can download the latest tarball here!

Gearman 0.25 has been released!

posted Nov 8, 2011, 1:16 PM by Customer Support

 * [2011-11-03] Version 0.25 of the Gearman Server and C library released! You can find it at Launchpad.

  • 1.0 libgearman API extracted.
  • Fix for long function names.
  • Fix for a worker hanging while still consuming CPU.
  • TokyoCabinet build fix.
  • Fix for 32-bit builds.

Gearman 0.23 has been released.

posted Jul 1, 2011, 1:36 PM by

Here are some of the fixes and new features in the new Gearman 0.23 release.

* Defined workers can now return GEARMAN_SHUTDOWN.
* Benchmark worker can now be told to shut down.
* Allocator code has been cleaned up (gearman_allocator_t).
* Added "workers" option to gearadmin.
* Workers now default to -1 as the timeout (lowers CPU on the gearmand server for non-active workers).
* SO_KEEPALIVE is now enabled on client/worker connections.
* By default, workers now grab the unique value of the job.
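As a quick illustration of the new gearadmin "workers" option above (this assumes a gearmand server is already running on its default port, 4730):

```shell
# Lists each connected worker and the functions it has registered.
gearadmin --workers
```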

O'Reilly Radar post with Brian Aker about using Memcached

posted Apr 5, 2011, 1:28 PM by Brian Aker

From O'Reilly Radar: "Memcached is one of the technologies that holds the modern Internet together, but do you know what it actually does? Brian Aker has certainly earned the title of Memcached guru, and below he offers a peek under the hood. He'll also provide a deeper dive into Memcached in a tutorial at the upcoming 2011 MySQL Conference."

1-7 of 7