I've read the other threads, so I know what's causing the problem.
I am moving a database from a dedicated server (serverA) to a shared server (serverB). I asked serverB's hosts to increase the max_allowed_packet size; their response: "we don't make custom changes on shared servers". They suggested I break the dump file into smaller chunks instead.
My question:
Does this error occur because one record in the dump file is bigger than the maximum allowed? If that's the case, then surely that one record will still be too big, no matter how many times the dump file is chopped up?
Are the hosts correct, and I just don't have a clue?
My confusion: I already have another vB-driven site on the shared server, and it has no problems with backup/restore. So I'm guessing the problem is that the dedicated server is writing larger packets into its dump. Correct?
So if I LOWER the max_allowed_packet size on the dedicated server, will the shared server then be able to read the dump file? If so, how low can I set it?
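For what it's worth, the knob that usually matters here isn't the source server's max_allowed_packet but mysqldump's own `--net_buffer_length` option, which caps the length of each extended INSERT statement it writes. Keeping that below serverB's max_allowed_packet should make the dump loadable. The sketch below illustrates the idea; the database name, user, and buffer size are assumptions, not values from this thread:

```shell
# Sketch only -- dbuser, forum_db, and 16K buffer are placeholder assumptions.
# First, find serverB's limit (run on serverB):
#   SHOW VARIABLES LIKE 'max_allowed_packet';

# Dump on serverA, capping each extended INSERT at ~16 KB so every
# statement stays well under the shared server's packet limit:
mysqldump --user=dbuser --password \
          --net_buffer_length=16384 \
          forum_db > forum_db.sql

# Restore on serverB:
mysql --user=dbuser --password forum_db < forum_db.sql
```

One caveat: if a single row (e.g. a very large post stored in one column) is itself bigger than serverB's max_allowed_packet, no amount of chunking or buffer tuning will help, because that one INSERT can't be split.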
Thanks.