A few weeks ago I faced a performance issue with a lookup query on a 5GB MySQL database. About 10% of these requests took 100x longer than the others. The syntax was really simple:
SELECT `authors`.* FROM `authors` WHERE `authors`.`remote_id` = 415418856 LIMIT 1;
Actually it wasn’t a real “slow query”. The table is quite large (3M+ records) and a response time of 300ms would be reasonable. Unfortunately my application runs this query thousands of times each minute, and performance was degraded. The remote_id is a standard varchar field indexed by a unique hash index, and I couldn’t understand why these queries took so much time.
After a long search I found a note in the MySQL reference manual related to type conversion:
MySQL 5.7 Reference Manual > 12.2 Type Conversion in Expression Evaluation
For comparisons of a string column with a number, MySQL cannot use an index on the column to look up the value quickly. If str_col is an indexed string column, the index cannot be used when performing the lookup in the following statement:
SELECT * FROM tbl_name WHERE str_col=1;
Everything was clear: my Ruby application doesn’t implicitly cast remote_id to a string when the value is an integer. This is quite strange behavior, because ActiveRecord, the application’s ORM, usually performs casts transparently. Usually.
The fixed version of the query runs in 3ms:
SELECT `authors`.* FROM `authors` WHERE `authors`.`remote_id` = '415418856' LIMIT 1;
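A quick way to see the difference for yourself is to compare the execution plans of the two versions (the table and column names match the query above):

```sql
-- With the numeric literal, MySQL must convert remote_id for every row,
-- so EXPLAIN typically reports a full table scan (type: ALL).
EXPLAIN SELECT `authors`.* FROM `authors` WHERE `authors`.`remote_id` = 415418856 LIMIT 1;

-- With the string literal, the unique index on remote_id can be used,
-- and EXPLAIN typically reports a single-row lookup (type: const).
EXPLAIN SELECT `authors`.* FROM `authors` WHERE `authors`.`remote_id` = '415418856' LIMIT 1;
```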
Small changes, big headaches, great results 🙂
I used to spend a lot of time playing with data in order to import, export and aggregate it. MySQL is one of my primary sources of data, because it sits behind many popular projects and is usually a first choice for custom solutions as well.
Recently I discovered some really useful, little-known functions which help me to export complex data.
The GROUP_CONCAT function returns a string result with the concatenated non-NULL values from a group. It returns NULL if there are no non-NULL values.
SELECT student_name,
GROUP_CONCAT(test_score SEPARATOR ',')
FROM student
GROUP BY student_name;
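The function also accepts DISTINCT and ORDER BY modifiers, which are handy when exporting aggregated lists (student and test_score here are the same hypothetical table and column as in the example above):

```sql
-- One row per student, scores deduplicated and sorted descending
SELECT student_name,
       GROUP_CONCAT(DISTINCT test_score ORDER BY test_score DESC SEPARATOR ' ')
FROM student
GROUP BY student_name;
```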
The SELECT ... INTO OUTFILE statement is intended primarily to let you very quickly dump a table to a text file on the server machine. It is really useful for exporting data as CSV directly from your master server. You can also use INTO DUMPFILE if you need a raw output.
SELECT a,b,a+b INTO OUTFILE '/tmp/result.txt'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '"'
LINES TERMINATED BY '\n'
FROM test_table;
If you plan to read the output with a standard CSV library, you should refer to RFC 4180 for the correct format in order to avoid parsing errors.
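For example, a variant of the statement above that stays close to RFC 4180 (every field quoted, embedded quotes doubled, CRLF line endings) might look like this — the table and column names are again hypothetical:

```sql
SELECT a, b, a+b INTO OUTFILE '/tmp/result.csv'
  FIELDS TERMINATED BY ','
  ENCLOSED BY '"'            -- quote every field, as strict CSV readers expect
  ESCAPED BY '"'             -- embedded quotes are written as "" instead of \"
  LINES TERMINATED BY '\r\n' -- RFC 4180 mandates CRLF line endings
FROM test_table;
```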
If your database is bigger than you are able to manage, you probably need Sqoop. It is a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases.
You can import data from your MySQL database to a CSV file stored on HDFS and access it from anywhere in your cluster.
sqoop import \
--connect jdbc:mysql://mysql.example.com/sqoop \
--username sqoop \
--password sqoop \