PHP/MySQL + Autoindex on deletes
I have noticed that when a row gets deleted, the autoindex (auto-increment) counter does not "roll back" (i.e., decrement) - it just continues counting from the last insert...
Over time it makes the index number "jump" around - e.g., 1,2,3,4,7,8,9,14,15,16, etc. which makes for a messy index column - and is particularly confusing to maintain if that same index is a foreign key in another table...
It only happens when deletes happen. I'm just wondering if anyone knows an elegant way to decrement the autoindex when a delete happens so that the index column remains sequential - i.e., 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16, etc.
Thx,
Mij
Busy posted this at 21:11 — 14th March 2005.
He has: 6,151 posts
Joined: May 2001
You can't; the only way is to reinsert the data after you delete something.
Delete the item, then pull all the remaining data out into an array and then put it all back again (without the index numbers).
If table B references table A by that index number, then really all the references to the removed index should be cleaned up too - depending on your data structure. Either way, if you reference something that isn't there you just won't get any results. If you are getting MySQL errors, you need to adjust your code - either the cheat way (putting @ in front of the query call to suppress the error) or the proper way (checking the result and displaying a "no results" message or whatever).
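Roughly what I mean by pulling the data out and putting it back - only a rough sketch, assuming a hypothetical `items` table with an AUTO_INCREMENT column `id` and a single `name` column, and an existing mysqli connection in $db:

<?php
// Sketch of the rebuild approach: delete the row, copy the surviving data
// out (without the id values), empty the table, then reinsert everything
// so AUTO_INCREMENT hands out 1, 2, 3, ... again.
// The `items` table and the $db connection here are hypothetical.

mysqli_query($db, "DELETE FROM items WHERE id = 7");

$names = array();
$result = mysqli_query($db, "SELECT name FROM items ORDER BY id");
while ($row = mysqli_fetch_assoc($result)) {
    $names[] = $row['name'];
}

// TRUNCATE empties the table and resets the AUTO_INCREMENT counter to 1.
mysqli_query($db, "TRUNCATE TABLE items");

foreach ($names as $name) {
    $name = mysqli_real_escape_string($db, $name);
    mysqli_query($db, "INSERT INTO items (name) VALUES ('$name')");
}
?>

As you can see, that is a lot of queries just to keep the numbers pretty, so I wouldn't do it on a big table.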
mijator posted this at 15:37 — 15th March 2005.
They have: 5 posts
Joined: Mar 2005
I'm not getting any errors - this is a data maintenance issue - i.e., the goal is to prevent the database from getting messy so that later on I can view it quickly without headaches...
I really don't like the idea of having to remove all the data in a table and then rebuild it each time a delete happens....
I've noticed that when I use the phpMyAdmin defrag function, it resets the autoindex no problem...I'd have thought a flush command or something would achieve this...
Surely there's a way to defrag a table after a delete occurs? How does phpMyAdmin do it?
Thx,
Mij
mijator posted this at 16:08 — 15th March 2005.
They have: 5 posts
Joined: Mar 2005
Reading through the MySQL documentation, I found this:
*****
ALTER TABLE tbl_name TYPE=InnoDB
That causes MySQL to rebuild the table. Another way to perform a defragmentation operation is to use mysqldump to dump the table to a text file, drop the table, and reload it from the dump file.
*****
I tried the first idea (i.e., Alter table) and it appears to have worked! I can see myself adding that statement after each delete, which should prevent the index/autoindex from becoming fragmented in the first place...
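Roughly what I have in mind after each delete - just a sketch, and the `items` table name and the mysqli connection in $db are hypothetical:

<?php
// Run the table rebuild straight after the delete. TYPE= is the old
// spelling used in the docs quote above; newer MySQL versions spell it
// ENGINE = InnoDB.
mysqli_query($db, "DELETE FROM items WHERE id = 7");
mysqli_query($db, "ALTER TABLE items TYPE = InnoDB");
?>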
Can anyone think of a reason why I wouldn't want to run the Alter table command each time?
Thx,
Mij
timjpriebe posted this at 17:13 — 15th March 2005.
He has: 2,667 posts
Joined: Dec 2004
I have to admit, I'm not really understanding why it's a problem that the numbers jump around. As I understand it, that's the standard way for an auto-incrementing index to work.
What's confusing about it? (Honest question, no sarcasm intended.)
Tim
http://www.tandswebdesign.com
Busy posted this at 20:26 — 15th March 2005.
He has: 6,151 posts
Joined: May 2001
ALTER TABLE works the same way as what I described; I wouldn't go via a text file though, as it could get corrupted.
The only reasons not to run it every time are security and resources. Also, if your table is being 'reordered' and someone tries to access the data at the same moment, they will get an error because the data won't be there.
You also said it is "particularly confusing to maintain if that same index is a foreign key in another table". If you change the autoindex, the numbers won't match in the other tables any more; you could change those as well, but it could end up very messy.
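Rough illustration of what I mean, assuming hypothetical `customers` (parent, AUTO_INCREMENT id) and `orders` (child, customer_id) tables and a mysqli connection in $db - if you renumber a customer, every order that points at it has to be renumbered too:

<?php
// If foreign keys are actually enforced (InnoDB), the checks have to be
// switched off while both tables are renumbered, otherwise one of the
// two UPDATEs fails part way through.
mysqli_query($db, "SET FOREIGN_KEY_CHECKS = 0");
mysqli_query($db, "UPDATE customers SET id = 5 WHERE id = 14");
mysqli_query($db, "UPDATE orders SET customer_id = 5 WHERE customer_id = 14");
mysqli_query($db, "SET FOREIGN_KEY_CHECKS = 1");
?>

And that's just one referencing table - repeat it for every table that uses the number and you can see how messy it gets.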
mijator posted this at 23:03 — 15th March 2005.
They have: 5 posts
Joined: Mar 2005
Thanks.
It's probably just my nature to try and keep everything in order.
Having the numbers jump around just makes the data harder to look at... and if there is a problem in the code you might not notice it, because your brain is already expecting to see "random" numbers. I find that by keeping a tight sequential order on indexes (particularly during the development phase), I can spot problems immediately - PLUS, everything looks "great" over the lifetime of the data...
I know the database is doing it for a good reason - that's how it ensures that each index is 100% unique (by never ever repeating one). Typical of machines - they do exactly what is required, but not necessarily in an intuitive way that humans relate to...
I think I'll try using the Alter Table after deletes and see if it causes any grief...
Mij
Renegade posted this at 09:01 — 16th March 2005.
He has: 3,022 posts
Joined: Oct 2002
Well, the auto-increment primary key is meant to be unique - and it already is - so I still don't understand why you'd want to do what you want to do... :S