MySQL Query Performance Optimization: Using temporary and Using filesort Caused by group by and order by on Different Columns
Published on 2018-12-06 14:04 | Category: MySQL


When group by and order by do not use the same column, MySQL has to do a file sort and create a temporary table. The SQL looks like this:

explain select c.id, group_concat(a.name separator ' · ') as cate
from cates as a
join vd_cates as b on a.id = b.cate_id
left join videos as c on b.video_id = c.id
where c.status = 1
group by c.id
order by c.addtime desc
limit 40 offset 0;

The EXPLAIN output is as follows:

+----+-------------+-------+------------+--------+-----------------------------------------------+----------+---------+------------------+-------+----------+----------------------------------------------+
| id | select_type | table | partitions | type   | possible_keys                                 | key      | key_len | ref              | rows  | filtered | Extra                                        |
+----+-------------+-------+------------+--------+-----------------------------------------------+----------+---------+------------------+-------+----------+----------------------------------------------+
|  1 | SIMPLE      | c     | NULL       | index  | PRIMARY,idx_event,idx_mixed,idx_view,idx_user | PRIMARY  | 4       | NULL             | 14711 |    33.33 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | b     | NULL       | ref    | video_id,cate_id                              | video_id | 4       | hiphop.c.id      |     1 |   100.00 | NULL                                         |
|  1 | SIMPLE      | a     | NULL       | eq_ref | PRIMARY                                       | PRIMARY  | 1       | hiphop.b.cate_id |     1 |   100.00 | NULL                                         |
+----+-------------+-------+------------+--------+-----------------------------------------------+----------+---------+------------------+-------+----------+----------------------------------------------+
3 rows in set (0.06 sec)
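
For reference, here is a minimal sketch of the schema this plan implies. The table and index names come from the query and the EXPLAIN output above, but the column types and the exact columns covered by idx_mixed are my assumptions, not the real definitions:

-- Hypothetical schema, reconstructed only for illustration.
-- Column types and the columns inside idx_mixed are assumptions.
CREATE TABLE videos (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
    status  INT          NOT NULL,
    addtime INT UNSIGNED NOT NULL,   -- assumed Unix timestamp
    PRIMARY KEY (id),
    KEY idx_mixed (status, addtime)  -- assumed composite index
);

CREATE TABLE cates (
    id   TINYINT UNSIGNED NOT NULL AUTO_INCREMENT,
    name VARCHAR(64)      NOT NULL,
    PRIMARY KEY (id)
);

CREATE TABLE vd_cates (
    video_id INT UNSIGNED     NOT NULL,
    cate_id  TINYINT UNSIGNED NOT NULL,
    KEY video_id (video_id),
    KEY cate_id (cate_id)
);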

The addtime column is indexed, but because the query groups by the primary key, this SQL does not use the addtime index. The number of scanned rows amounts to essentially a full table scan, and the query also triggers a file sort and a temporary table, so this case is well worth optimizing.
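
To make the cause concrete, here is a hypothetical comparison query; it orders by id instead of addtime, so it returns a different order and is not a fix. When GROUP BY and ORDER BY both refer to c.id, the primary-key scan can typically satisfy the grouping and the ordering at once, and the Using temporary / Using filesort notes usually disappear from EXPLAIN:

-- Comparison only: ordering by the same column we group by.
-- This changes the result order, so it does not meet the real requirement.
explain select c.id, group_concat(a.name separator ' · ') as cate
from cates as a
join vd_cates as b on a.id = b.cate_id
left join videos as c on b.video_id = c.id
where c.status = 1
group by c.id
order by c.id
limit 40 offset 0;

Since the real requirement is ordering by addtime, the fix below takes a different route.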

The optimization is straightforward: use a subquery to fetch, via the index, only the rows we actually need, and then join that result back into the original SQL:

explain select c.id, group_concat(a.name separator ' · ') as cate
from cates as a
left join vd_cates as b on a.id = b.cate_id
join (
    select id, addtime from videos
    where status = 1
    order by addtime desc
    limit 40 offset 0
) as c on c.id = b.video_id
group by c.id
order by c.addtime desc;

As you can see, I first query the videos table through the index, sorted by addtime, then join that result back into the original statement, and finally sort only those 40 rows by addtime. The execution plan after the optimization is as follows:

+----+-------------+------------+------------+--------+------------------+-----------+---------+------------------+------+----------+-----------------------------------------------+
| id | select_type | table      | partitions | type   | possible_keys    | key       | key_len | ref              | rows | filtered | Extra                                         |
+----+-------------+------------+------------+--------+------------------+-----------+---------+------------------+------+----------+-----------------------------------------------+
|  1 | PRIMARY     | <derived2> | NULL       | ALL    | NULL             | NULL      | NULL    | NULL             |   40 |   100.00 | Using filesort                                |
|  1 | PRIMARY     | b          | NULL       | ref    | video_id,cate_id | video_id  | 4       | c.id             |    1 |   100.00 | Using where                                   |
|  1 | PRIMARY     | a          | NULL       | eq_ref | PRIMARY          | PRIMARY   | 1       | hiphop.b.cate_id |    1 |   100.00 | NULL                                          |
|  2 | DERIVED     | videos     | NULL       | index  | NULL             | idx_mixed | 8       | NULL             |   40 |     0.33 | Using where; Backward index scan; Using index |
+----+-------------+------------+------------+--------+------------------+-----------+---------+------------------+------+----------+-----------------------------------------------+
4 rows in set (0.03 sec)

The number of scanned rows drops from more than 14,000 straight to 40. In the bottom row of the plan, the videos table uses Backward index scan, i.e. the index is scanned in descending order.
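
As a side note, the Backward index scan label only shows up in MySQL 8.0 and later, and 8.0 also supports explicitly descending index keys. If idx_mixed really is a composite index on (status, addtime), which is a guess on my part, a descending key would let the derived-table query scan forward instead; idx_mixed_desc below is a hypothetical name:

-- Optional, MySQL 8.0+ only; verify the real columns of idx_mixed first.
ALTER TABLE videos ADD INDEX idx_mixed_desc (status, addtime DESC);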

Feels great~

The above comes from a real project of mine. With roughly 100,000 rows in the table, the API backed by this SQL used to respond in a stable 60 ms; after the optimization it became a stable 30 ms. If you run into the same problem, give it a try; the improvement is very noticeable.
