1:42pm - A frantic instant message comes in from a smallish project: their site is down.
1:42pm - I ssh into both the webserver and the database server. The database server takes a while to actually get to a login prompt.
1:43pm - Email arrives containing the same frantic instant message, with importance set to high. (Side note: people still use importance flags in email? People still email after they've contacted you another way? People still email?)
1:44pm - I run "top" on both the webserver and the database server. (There isn't any other monitoring since it's a smallish project. There probably should be. Will have to get the client to pay for that.) Webserver looks fine. Database server load is really high: around 5 when it's usually around 0.30. This is a MySQL server on Linux.
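For a quick read of the same numbers without the interactive "top" display, uptime prints the 1/5/15-minute load averages:

```shell
# Print load averages (1, 5, 15 minutes). On a box whose baseline
# is ~0.30, a sustained 5 means something is chewing CPU or I/O.
uptime
```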
1:45pm - Exit "top" and open a mysql command-line prompt.
1:46pm - Execute "show processlist" in mysql. A ton of queries show up, all queued behind one sinister-looking one on a new table that was set up recently.
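The triage step is roughly this (a sketch; the interesting detail is the pile-up, not the exact output):

```sql
-- List every connection and the statement it's running. A stack of
-- queries all showing large Time values behind one slow statement
-- is the tell; FULL keeps the query text from being truncated.
SHOW FULL PROCESSLIST;
```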
1:47pm - Execute "explain <sinister-looking query>" in mysql to see how mysql plans to execute that query.
1:48pm - possible_keys = NULL
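A sketch of what that looks like, with a hypothetical table and column standing in for the real ones:

```sql
-- Hypothetical stand-in for the sinister-looking query.
EXPLAIN SELECT * FROM new_table WHERE customer_id = 42;
-- possible_keys: NULL means MySQL has no index it can use for the
-- WHERE clause, so every execution does a full table scan
-- (type: ALL in the same row confirms it).
```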
1:49pm - Ruh roh. Execute "alter table add index..."
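The fix, using the same hypothetical column name (the real column is elided above):

```sql
-- Add an index on the column the slow query filters by, so lookups
-- become an index seek instead of a full table scan.
ALTER TABLE new_table ADD INDEX idx_customer_id (customer_id);
```

Worth knowing: on older MySQL versions, ALTER TABLE typically rebuilds the table and blocks writes while it runs, which would explain the next few minutes spent watching the database work.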
1:50pm - database working.
1:51pm - database working.
1:52pm - database working.
1:53pm - Database done working. Execute "show processlist" to see 2 queries left on the stack.
1:54pm - Execute "top" again; load is down to 2.7 and falling. 1.9 now. 0.70 now.
1:55pm - Declare crisis averted via instant message and email. Wonder again if people still use email.
1:56pm - Assign the developer responsible for the non-indexed table the task of making sure the schema changes get into source control.