sixtydoses. where od is harmless.

September 23, 2008

DB2 migration across different platforms.

Filed under: Tech — od @ 2:36 am

So, how do you migrate a database from one server to another across different platforms? There are a few articles related to this topic on the net, and some of them are very good, but I still stumbled on some problems during the migration process. This is not an expert howto article; do note that I am next to clueless when it comes to databases. But yea, this is a howto article so that I remember how I did it the last time, and hopefully, it’ll be of help to anyone who comes across this post.

Basically, to migrate a db between 2 servers running on different platforms, you’ll need these 2 awesome utility commands:

DB2MOVE

Use db2move to export all tables and data in PC/IXF format.

The db2move command:
db2move <database-name> <action> [<option> <value>]

DB2LOOK

Use db2look command to extract the DDL statements. What are DDL statements? DDL statements are used to build and modify the structure of your tables and other objects in the database.

The db2look command:

db2look -d <database-name> [<-option1> <-option2> … <-optionx>]

STEP-BY-STEP example – based on a real scenario, with the problems encountered and how they were solved.

Scenario:

Migrating DB2 v8.2 ESE on Linux CentOS 5 to DB2 v9.5 on Windows 2003.

BEFORE MIGRATION STARTS

I have 5 databases located on linux running on DB2 ESE version 8.2. This is my first time doing DB2, my first time seeing the databases, so I did the following before I started migrating:

1) Do a full backup for all databases. This is so very important that I think it’s worth mentioning it another 3 times. In caps. – DO A FULL BACKUP FOR ALL DATABASES. DO A FULL BACKUP FOR ALL DATABASES. DO A FULL BACKUP FOR ALL DATABASES.
2) Record the number of tables listed in each database (quick commands for both steps are sketched below).
3) Do a full backup for all databases.
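
For the record, the backup in steps 1 and 3 and the table count in step 2 look something like this (the backup path is just an example, adjust to your setup):

db2 backup db db1 to /backup/db2
db2 connect to db1
db2 "select count(*) from syscat.tables where type = 'T'"
db2 connect reset

Note that syscat.tables includes the system catalog tables too, so filter on tabschema if you only want to count your own.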

MIGRATE

On Linux:

Export the data with the db2move command (no database connection needed). Run the command in a directory meant for each database, as it will create a number of IXF files, depending on how huge your database is.

db2move db1 export
db2move db2 export
db2move db3 export
db2move db4 export
db2move db5 export
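
That is, something like this for each database (the directory layout is just my own convention):

mkdir -p /export/db1
cd /export/db1
db2move db1 export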

Generate the DDL statements with the db2look command (no database connection needed).

db2look -d db1 -e -a -td @ -l -o db1.sql
db2look -d db2 -e -a -td @ -l -o db2.sql
db2look -d db3 -e -a -td @ -l -o db3.sql
db2look -d db4 -e -a -td @ -l -o db4.sql
db2look -d db5 -e -a -td @ -l -o db5.sql

I didn’t want to use the default delimiter semicolon (;) because am not sure if there are any stored procedures or functions (am not even sure what those are) on the databases. So just to be on the safe side, I used ‘@’ as the termination character instead.
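
The reason this matters: a procedure body contains semicolons of its own, so if ; is also the statement terminator, the CLP would chop the procedure into pieces. With @ as the terminator, a procedure in the generated DDL (this one is made up) can look like this and still be read as a single statement:

CREATE PROCEDURE dummy_proc()
LANGUAGE SQL
BEGIN
  DECLARE x INT;
  SET x = 1;
END @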

So far, so good.

FTP the files over to the Windows server.

All of the *.ixf files – transfer them in binary mode.
db2move.lst – transfer them in ascii mode.
*.sql (generated by the db2look command) – transfer them in ascii mode.
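
A typical ftp session for this would go something along these lines (the hostname is made up, and prompt just turns off the per-file confirmation for mput):

ftp winserver
ftp> prompt
ftp> binary
ftp> mput *.ixf
ftp> ascii
ftp> put db2move.lst
ftp> mput *.sql
ftp> bye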

On Windows:

I already have DB2 ESE version 9.5 installed, with the DAS user and instance created (I prefer the names to match those on the linux server).

Create all the databases that I want to import in.

db2 create db db1
db2 create db db2
db2 create db db3
db2 create db db4
db2 create db db5

Run the script generated by db2look (no database connection needed).

db2 -td@ -vf db1.sql
db2 -td@ -vf db2.sql
db2 -td@ -vf db3.sql
db2 -td@ -vf db4.sql
db2 -td@ -vf db5.sql

Notice that I specified the -l option while running the db2look command, which means it will generate the DDL statements for user-defined table spaces, database partition groups and buffer pools. Check the sql scripts and change the location paths to match the Windows environment before executing them. Something like:

‘/home/db2inst1/db2inst1/blah/path3/db2inst1_data.tbs’ 30000
to
‘C:\db2inst1\blah\path3\db2inst1_data.tbs’ 30000

Else, you’ll get a ‘Bad container path’ error.
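
If there are a lot of containers, a one-liner on the linux side before transferring the scripts can do the swap (using the example paths from above; the backslashes are doubled because sed treats a single one as an escape):

sed -i 's|/home/db2inst1/db2inst1/blah/path3/|C:\\db2inst1\\blah\\path3\\|g' db1.sql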

I prefer to pipe the result to a file so that I can review it later. Most of the time I wasn’t able to monitor the output since some of the databases are pretty huge and I worked remotely with a lousy, lousy network connection (I love rdesktop for this).
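
Plain shell redirection does the job (the log filename is arbitrary):

db2 -td@ -vf db1.sql > db1_ddl.log 2>&1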

By this time, my databases contain all the tables that the original databases on linux do. But of course, they’re all empty.

Normally, there shouldn’t be any problems until you come to the data loading part (no database connection needed).

db2move db1 load
db2move db2 load
db2move db3 load
db2move db4 load
db2move db5 load

The db2move utility will also create an output file based on the action that you specified (in my case, it’s LOAD.out), so I don’t have to bother piping the result to a file.

If this part ends successfully, you’re all done. Unfortunately for me, there were warnings inside the LOAD.out files. I have 5 LOAD.out files altogether, and 4 of them contain the same warning code:

* LOAD: table "DB2INST1"."RQVIEWS"
*** WARNING 3107. Check message file tab52.msg!
*** SQL Warning! SQLCODE is 3107
*** SQL3107W There is at least one warning message in the message file.

So what’s in tab52.msg?

SQL3229W The field value in row “1” and column “9” is invalid. The row was
rejected. Reason code: “1”.

SQL3185W The previous error occurred while processing data from row “1” of
the input file.

SQL3229W The field value in row “2” and column “9” is invalid. The row was
rejected. Reason code: “1”.

SQL3185W The previous error occurred while processing data from row “2” of
the input file.

SQL3229W The field value in row “3” and column “9” is invalid. The row was
rejected. Reason code: “1”.

SQL3185W The previous error occurred while processing data from row “3” of
the input file.

SQL3229W The field value in row “4” and column “9” is invalid. The row was
rejected. Reason code: “1”.

Data type mismatch? To be frank, I don’t know, but as I reviewed the db2move options again, there’s one that I had probably missed.

-l lobpaths

LOB stands for Large OBject. A large object (LOB) is a string data type with a size ranging from 0 bytes to 2 GB (GB equals 1 073 741 824 bytes).

So, if you know where your lobs are, specify this option while exporting the data, and make sure to check that you have files with names similar to this one when you’re done:

tab52a.001.lob
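
With that option, the export would have looked something like this (the LOB path here is hypothetical):

db2move db1 export -l /home/db2inst1/lobs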

Being a complete noob in the world of db, I didn’t know where the lobs were. In fact, I didn’t even know what it meant the first time I encountered it (no wonder I purposely ignored the -l option in the first place lol). So, I decided to export the db on linux once again and dump it straight to Windows, on the fly. This way, even without specifying the -l option, it will export your LOBs as well. Nice.

On Windows, I dropped all the databases that I’d created since I prefer to have a fresh start. Now all I have to do is access the databases on linux remotely from my db2 on Windows.

db2 catalog tcpip node dbonlinux remote 10.8.8.230 server 50000

dbonlinux – an arbitrary name for the node I created.
10.8.8.230 – IP address of the linux (remote) server.
50000 – the iiimsf port used. This is the default port.

db2 catalog db db1 at node dbonlinux
db2 catalog db db2 at node dbonlinux
db2 catalog db db3 at node dbonlinux
db2 catalog db db4 at node dbonlinux
db2 catalog db db5 at node dbonlinux

db2 terminate

Now I can connect to my linux db remotely from the Windows server by using these commands:

db2 connect to db1 user db_username using db_password
db2 connect to db2 user db_username using db_password
db2 connect to db3 user db_username using db_password
db2 connect to db4 user db_username using db_password
db2 connect to db5 user db_username using db_password

If you fail to connect, check if you’re using the correct port.

To check which port should be used on the server that you wish to access:

db2 get dbm cfg | grep SVCENAME

Most of the time it’ll return the service name instead of the port, so look the port number up by the service name in the services file.
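
Say SVCENAME comes back as db2c_db2inst1 (just an example name), then on linux:

grep db2c_db2inst1 /etc/services

On Windows, the same mapping lives in C:\Windows\system32\drivers\etc\services.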

Now that am successfully connected, I ran the db2move command again.

db2move db1 export

And I did the same with the rest of the 4 databases. This time when I checked, the LOBs were exported as well. Coolness.

Remember to disconnect from the database that you’ve accessed remotely. You wouldn’t want to mess with the production database. As for me, I won’t be needing to access the remote databases again, so I removed the database aliases and the node I’d created.

db2 uncatalog db db1
db2 uncatalog db db2
db2 uncatalog db db3
db2 uncatalog db db4
db2 uncatalog db db5
db2 uncatalog node dbonlinux

Create all the 5 databases again with the db2 create db <database_name> command.

I ran the sql script generated by db2look again, loaded the data using the db2move command, and that’s it, I’m done.

But, am not so lucky. Only 3 out of 5 databases made it through without any errors. To be honest, am pretty devastated at this point.

Further checking revealed that during the execution of the sql script generated by db2look, the table spaces were not created because of a bad container path. I was completely dumbfounded because the container path was good, seriously. Aargghhhhhhhhhhhhhhhhhh! I decided to proceed without the table spaces and create them manually afterwards.

All this while I’d been doing db2move in load mode. With db2move <db_name> load, you have to have the tables created in the database first, else you’ll receive tons of errors. With import, you don’t. So, for the databases that I’d failed to load the data into, I did import instead. Again, I dropped the databases and recreated them for a clean start.

db2move db1 import
db2move db2 import

Success. Cool.

Now that the tables are all imported, I created the necessary table spaces manually, matching the names listed in the sql script generated by db2look.
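
Roughly like this, reusing the container path and size from the generated DDL (the tablespace name is made up):

db2 "create tablespace data_tbs managed by database using (file 'C:\db2inst1\blah\path3\db2inst1_data.tbs' 30000)"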

Run the sql script generated by db2look.

I’m DONE!

And that’s what I thought. Bleargh.

Well ok, I’m 95% done, with all the exporting and loading, which is the crucial part anyways.

VERIFYING INTEGRITY

The final part is to check the integrity of the migrated database.

When I first ran a select * from table_name, I encountered this error:

SQL0668N Operation not allowed for reason code “1” on table blah.db1. SQLSTATE=57016

More info at https://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.db2.luw.messages.sql.doc/doc/msql00668n.html

Run the following command and all’s good:

db2 set integrity for <table_name> immediate checked

To check which tables are in check pending state, run the following command:

db2 "select tabname from syscat.tables where status = 'C'"

The output is a list of tables that require the execution of the set integrity statement. It’d be lovely to have a script or a single command that can set the integrity on the affected tables, rather than doing it one by one for each table; a rough sketch of one is below.
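
Something along these lines should do it (an untested sketch: it generates the set integrity statements straight from the catalog, then runs them; -x strips the column headers):

db2 connect to db1
db2 -x "select 'set integrity for ' || rtrim(tabschema) || '.' || rtrim(tabname) || ' immediate checked;' from syscat.tables where status = 'C'" > set_integrity.sql
db2 -tvf set_integrity.sql

If there are foreign keys between the pending tables, it may need more than one pass, since a child can’t be checked while its parent is still pending.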

Yea, I’m DONE 😀

Hope I didn’t miss out anything.

Recommended readings:

Using DB2 utilities to clone databases across different platforms

DB2 Version 8 Connectivity Cheat Sheet

DB2 Backup Basics

DB2 Backup Basics – Part 2

DB2 Backup Basics – Part 3


September 22, 2008

Production server – ain’t no playground.

Filed under: Tech — od @ 11:10 pm

Screwing up a production server is a nightmare, especially when it involves a database. Well actually, it doesn’t matter. As long as it’s a production server, it is a nightmare. It’s just too scary that I had sleepless nights during the weekend and ended up drooling through the weekdays.

I am not a database expert, so when my team leader came to me and asked me to migrate the production database from Windows to Linux because the db admin had already resigned, I did fret a bit. So I started doing some quick research on db2, but in the end, one silly mistake I made brought the entire database down. As well as the system that relies on it. As well as myself. I was down.

I thanked myself for doing an offline backup before starting with all the migration. Am not gonna nag about why all this while the production server was running without a single backup, or why no one told me that the server was in use on a daily basis. Yes, it is a production server, but since I was allowed to do the job during office hours, my assumption was that it had been put on hold so that no one would be using it during bright daylight. Else, I would’ve considered doing an online backup.

That was an experience that I will never forget. And this brings me to my next post.