MySQL Query Performance Optimization -- Using temporary and Using filesort Caused by group by and order by on Different Columns
Posted on 2018-12-06 14:04 | Category: MySQL


When group by and order by are on different columns, MySQL falls back to a filesort and a temporary table. The SQL in question looks like this:

mysql> explain select c.id, group_concat(a.name separator ' · ') as cate from cates as a join vd_cates as b on a.id = b.cate_id left join videos as c on b.video_id = c.id where c.status = 1 group by c.id order by c.addtime desc limit 40 offset 0;

The EXPLAIN output is as follows:

+----+-------------+-------+------------+--------+-----------------------------------------------+----------+---------+------------------+-------+----------+----------------------------------------------+
| id | select_type | table | partitions | type   | possible_keys                                 | key      | key_len | ref              | rows  | filtered | Extra                                        |
+----+-------------+-------+------------+--------+-----------------------------------------------+----------+---------+------------------+-------+----------+----------------------------------------------+
|  1 | SIMPLE      | c     | NULL       | index  | PRIMARY,idx_event,idx_mixed,idx_view,idx_user | PRIMARY  | 4       | NULL             | 14711 |    33.33 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | b     | NULL       | ref    | video_id,cate_id                              | video_id | 4       | hiphop.c.id      |     1 |   100.00 | NULL                                         |
|  1 | SIMPLE      | a     | NULL       | eq_ref | PRIMARY                                       | PRIMARY  | 1       | hiphop.b.cate_id |     1 |   100.00 | NULL                                         |
+----+-------------+-------+------------+--------+-----------------------------------------------+----------+---------+------------------+-------+----------+----------------------------------------------+
3 rows in set (0.06 sec)

The addtime column is indexed, but because the query groups by the primary key, this SQL does not use that index. The scan is essentially a full table scan, and on top of that it triggers a filesort and a temporary table, so this case is well worth optimizing.
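
The original post does not show the schema, so here is a minimal sketch of what the videos table could look like. The column types and the composite idx_mixed index on (status, addtime) are assumptions (inferred from the key_len = 8 and the index usage in the optimized plan further down), not the author's actual DDL:

-- Hypothetical DDL, for illustration only; other indexes (idx_event, idx_view, idx_user) are omitted
CREATE TABLE videos (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
    status  INT NOT NULL DEFAULT 0,
    addtime INT UNSIGNED NOT NULL,       -- assumed to be a unix timestamp
    -- ... other columns omitted ...
    PRIMARY KEY (id),
    KEY idx_mixed (status, addtime)      -- lets WHERE status = 1 ORDER BY addtime DESC walk the index
) ENGINE=InnoDB;

With such an index, a query that filters on status and sorts on addtime can be satisfied directly from the index, which is exactly what the rewrite below takes advantage of.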

The fix is straightforward: use a subquery to first fetch the rows from the table that can use the index, then join that result back into the original statement:

mysql> explain select c.id, group_concat(a.name separator ' · ') as cate from cates as a left join vd_cates as b on a.id = b.cate_id join (select id, addtime from videos where status = 1 order by addtime desc limit 40 offset 0) as c on c.id = b.video_id group by c.id order by c.addtime desc;

As you can see, I first query the videos table using the index and sort it by addtime, then join the result into the original statement, and finally sort just those 40 rows by addtime again. The optimized EXPLAIN output:

+----+-------------+------------+------------+--------+------------------+-----------+---------+------------------+------+----------+-----------------------------------------------+
| id | select_type | table      | partitions | type   | possible_keys    | key       | key_len | ref              | rows | filtered | Extra                                         |
+----+-------------+------------+------------+--------+------------------+-----------+---------+------------------+------+----------+-----------------------------------------------+
|  1 | PRIMARY     | <derived2> | NULL       | ALL    | NULL             | NULL      | NULL    | NULL             |   40 |   100.00 | Using filesort                                |
|  1 | PRIMARY     | b          | NULL       | ref    | video_id,cate_id | video_id  | 4       | c.id             |    1 |   100.00 | Using where                                   |
|  1 | PRIMARY     | a          | NULL       | eq_ref | PRIMARY          | PRIMARY   | 1       | hiphop.b.cate_id |    1 |   100.00 | NULL                                          |
|  2 | DERIVED     | videos     | NULL       | index  | NULL             | idx_mixed | 8       | NULL             |   40 |     0.33 | Using where; Backward index scan; Using index |
+----+-------------+------------+------------+--------+------------------+-----------+---------+------------------+------+----------+-----------------------------------------------+
4 rows in set (0.03 sec)

The number of scanned rows drops from more than 14,000 to 40. Looking at the last row of the plan, the videos table uses a Backward index scan, i.e. the index is read in reverse order.
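
If you want to inspect the inner query's index usage on its own, you can EXPLAIN just the derived subquery (a suggested check, not output from the original post). Note that the Backward index scan note in Extra is reported by MySQL 8.0 and later, where the optimizer can scan an index in descending order:

mysql> explain select id, addtime from videos where status = 1 order by addtime desc limit 40 offset 0;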

Color me impressed~

The above comes from a real project of mine. With about 100,000 rows in the table, the endpoint using this SQL had a stable response time of 60 ms; after the optimization it dropped to a stable 30 ms. If you run into the same problem, give it a try; the improvement is very noticeable.
