sixtydoses. where od is harmless.

March 5, 2010

Either Oracle Smart Update utility interface sucks or I’m an artard.

Filed under: Tech — Tags: , , — od @ 6:43 pm

I just couldn't figure out how to get Smart Update to work offline. I don't have the authorization to download the patches myself, so I got them from an Oracle support guy, fired up bsu.sh, selected "work offline", and spent several hours trying to figure out how to point it to my patches directory. Bleargh. Fortunately, the CLI works like a charm.


od@sysh:/opt/bea103/utils/bsu$ ./bsu.sh -install -patchlist=79YU,CJ4W,ETR7,IQXV,SYCB,T552 -patch_download_dir=/home/od/Desktop/allpatcheszip -prod_dir=/opt/bea103/wlserver_10.3
Checking for conflicts..
No conflict(s) detected

Installing Patch ID: 79YU..
Result: Success

Installing Patch ID: CJ4W.
Result: Success

Installing Patch ID: ETR7.
Result: Success

Installing Patch ID: IQXV.
Result: Success

Installing Patch ID: SYCB.
Result: Success

Installing Patch ID: T552.
Result: Success



On a different note, while I was installing WebLogic Server 10.3.x in silent mode, I was prompted with this error:


od@sysh:~/Desktop$ ./server103_linux32.bin -mode=silent -silent_xml=silent_103.xml
Extracting 0%……………………………………………………………………………………….100%
The local BEA product registry is corrupted. Please select another BEA Home or contact BEA Support
** Error during execution, error code = 65280.



Found a forum post saying that for versions above 9.x, 'COMPONENT_PATHS' no longer accepts values like "WebLogic Server/Core Application Server". But it actually does accept the "WebLogic Server/<insert_component>" format, and even the documentation says so.

So anyway, the error was due to my habit of copying and pasting in vi, which corrupted my component paths line. So yea, if you get that kind of error, I'd say chances are your silent.xml file format is incorrect.
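One quick way to catch this kind of vi-paste damage before re-running the installer is to check that every <data-value> element opens and closes on a single line. A minimal sketch (silent_103.xml is the filename used above; adjust to yours):

```shell
# Flag any <data-value> element that does not close ("/>") on the same
# line -- the symptom of a paste that wrapped the long COMPONENT_PATHS value.
awk '/<data-value/ && !/\/>/ { print "broken line " NR; bad=1 }
     END { exit bad }' silent_103.xml && echo "silent.xml looks intact"
```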

These are my silent.xml files.



This will install all WebLogic Server components.

###################################################

<?xml version="1.0" encoding="UTF-8"?>
<!-- Silent installer option: -mode=silent -silent_xml=C:\bea\silent.xml -->

<bea-installer>
<input-fields>
<data-value name="BEAHOME" value="/opt/bea1032" />
<data-value name="WLS_INSTALL_DIR" value="/opt/bea1032/wlserver_10.3" />
<data-value name="COMPONENT_PATHS" value="WebLogic Server" />
<data-value name="INSTALL_NODE_MANAGER_SERVICE" value="yes" />
<data-value name="NODEMGR_PORT" value="5559" />
<data-value name="INSTALL_SHORTCUT_IN_ALL_USERS_FOLDER" value="yes"/>

</input-fields>
</bea-installer>

###################################################



This will install all WebLogic Server components, without the samples.

###################################################

<?xml version="1.0" encoding="UTF-8"?>
<!-- Silent installer option: -mode=silent -silent_xml=C:\bea\silent.xml -->

<bea-installer>
<input-fields>
<data-value name="BEAHOME" value="D:\bea\wls103_silent" />
<data-value name="WLS_INSTALL_DIR" value="D:\bea\wls103_silent\wlserver_10.3" />
<data-value name="WLW_INSTALL_DIR" value="D:\bea\wls103_silent\workshop_10.3" />
<data-value name="COMPONENT_PATHS" value="WebLogic Server/Core Application Server|WebLogic Server/Administration Console|WebLogic Server/Configuration Wizard and Upgrade Framework|WebLogic Server/Web 2.0 HTTP Pub-Sub Server|WebLogic Server/WebLogic JDBC Drivers|WebLogic Server/Third Party JDBC Drivers|WebLogic Server/WebLogic Server Clients|WebLogic Server/WebLogic Web Server Plugins|WebLogic Server/UDDI and Xquery Support|WebLogic Server/Server Examples|Workshop/Workshop for WebLogic|Workshop/Workshop Runtime Framework" />
<data-value name="USE_EXTERNAL_ECLIPSE" value="false" />
<data-value name="EXTERNAL_ECLIPSE_DIR" value="D:\eclipse332\eclipse" />
<data-value name="INSTALL_NODE_MANAGER_SERVICE" value="yes" />
<data-value name="NODEMGR_PORT" value="5559" />
<data-value name="INSTALL_SHORTCUT_IN_ALL_USERS_FOLDER" value="yes"/>

</input-fields>
</bea-installer>

###################################################


What caused the error: I simply copied the component paths from my terminal and pasted them in vi, so I missed the fact that the line was no longer continuous; the paste had introduced breaks of white space. Something like this:

<data-value name="COMPONENT_PATHS" value="WebLogic Server/Core Application Server|WebLogic Server/Administration Console|WebLogic Server/Configuration Wizard and Upgrade Framework|WebLogic Server/Web 2.0 <break of white space>
HTTP Pub-Sub Server WebLogic Server/WebLogic JDBC Drivers|WebLogic Server/Third Party JDBC <break of white space>
Drivers|WebLogic Server/WebLogic Server Clients|WebLogic Server/WebLogic Web Server Plugins|WebLogic Server/UDDI and Xquery Support|WebLogic Server/Server Examples|Workshop/Workshop for <break of white space>
WebLogic|Workshop/Workshop Runtime Framework" />

January 22, 2010

Autostart tomcat upon reboot.

Filed under: Tech — Tags: , , , , — od @ 1:04 pm

So this morning they shut down the server and called me up complaining that the website was down.

No, I didn't ask how many times they had rebooted. Lol.

Anyway, the website was down because I hadn't configured apache and tomcat to start automatically upon reboot. Am so lazy today because it's Friday; basically it's a yippee day, a day when it's legal to come to work late and go home early.

Googled for an autostart script, but none satisfied my needs, so here's mine (adapted from a couple of scripts), because sharing is caring.

This script will always run tomcat as user 'admin' (EUID 500). If you run the script as a different user, it'll prompt for admin's password. Dump the script in /etc/init.d/ and run chkconfig to configure runlevel startup.





#!/bin/bash
#
# tomcat       This is the init.d script used to start tomcat.
#              It calls $CATALINA_HOME/bin/startup.sh or shutdown.sh
# chkconfig: - 91 15
# description: Apache Tomcat is an open source software implementation of the Java Servlet and JavaServer Pages technologies.
# processname: tomcat

export JAVA_HOME=/usr/java/jdk1.6.0_16
export CATALINA_HOME=/usr/local/apache-tomcat-5.5.28

tomcat_stop() {
    if [[ $EUID -ne 500 ]]; then
        # Not admin (EUID 500): run shutdown.sh as admin. The single quotes
        # delay expansion of $CATALINA_HOME until the su subshell, which
        # inherits it since it is exported above.
        su -c '$CATALINA_HOME/bin/shutdown.sh' admin
        exit $?
    else
        $CATALINA_HOME/bin/shutdown.sh
    fi
}

tomcat_start() {
    if [[ $EUID -ne 500 ]]; then
        su -c '$CATALINA_HOME/bin/startup.sh' admin
        exit $?
    else
        $CATALINA_HOME/bin/startup.sh
    fi
}

case $1 in
    start)
        echo -n "Starting Tomcat server:"
        tomcat_start
        echo "."
        ;;
    stop)
        echo -n "Stopping Tomcat server:"
        tomcat_stop
        echo "."
        ;;
    *)
        echo "Usage: /etc/init.d/tomcat start|stop"
        ;;
esac
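Assuming the script above is saved as /etc/init.d/tomcat, registering it might look like this sketch (chkconfig reads the runlevel info from the `# chkconfig:` header line):

```shell
chmod 755 /etc/init.d/tomcat
chkconfig --add tomcat     # register using the "# chkconfig: - 91 15" header
chkconfig tomcat on        # enable for the default runlevels
chkconfig --list tomcat    # verify the runlevel configuration
```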

December 2, 2009

StatsView on Ubuntu Jaunty.

Filed under: Tech — Tags: , , , — od @ 1:37 am

Quoted from StatsView README:

#####################################################

PREREQUISITES
————-

StatsView is written in Perl5, using the Perl Tk extension library. I recommend
that you use perl5.005_03 or later, and Tk800.014 or later, as StatsView has
been tested with these versions. You will also need the Tk::GBARR add-on
package for this version.

The graphing is done with gnuplot, and version 3.7 or later is required -
a copy can be found in the gnuplot_src subdirectory.

#####################################################

I already have Perl5 and gnuplot in my system, so I only need to install Perl Tk, Tk-GBARR and StatsView.

Install Perl Tk:

#apt-get install perl-tk


Grab Tk-GBARR from http://www.cpan.org/authors/id/SREZIC/. At the time of writing, the latest version of Tk-GBARR is 2.08:

#wget http://www.cpan.org/authors/id/SREZIC/Tk-GBARR-2.08.tar.gz
#tar xvfz Tk-GBARR-2.08.tar.gz
#cd Tk-GBARR-2.08
#perl Makefile.PL
#make
#make test
#make install


Grab StatsView from http://www.cpan.org/authors/id/ABURLISON/.

#wget http://www.cpan.org/authors/id/ABURLISON/StatsView-1.4.tar.gz
#tar xvfz StatsView-1.4.tar.gz
#cd StatsView-1.4
#perl Makefile.PL
#make install
#./scripts/sv


Test StatsView using the example included:

#cd /path/StatsView-1.4/examples
#gzip -d sar.txt.gz
#../scripts/sv sar.txt


It worked wonderfully with the sar output sample.. but I couldn't get it to display any graph from my own collection of sar output, in either binary or text format. At this point am still not sure why it kept complaining that my output files are invalid.

They are valid AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAa!!! 😯

Now what am I supposed to tell my boss?

“Hi boss.. remember the Linux server performance data that I promised last week? I don’t have it.”
“How come?”
“I have it, but I don’t have it in a pwetty graph format like I pwomised.”
“Well it’s okay you can pass me the raw data.”
“It’s in binary.”
“What?”
“Uh.. you know there are 10 types of people in the world: Those who understand binary, and those who don’t…”
“You don’t have anything to present, do you?”
*GULP*

Source: http://jerkharris.com/books/books/PerlOrDBA/oracleperl-CHP-3-SECT-3.html

May 23, 2009

Edge Load Balancer Network Dispatcher – Double Collocated HA on HP-UX.

Filed under: Tech — Tags: , , , , — od @ 5:01 am

One of my recent projects was to configure Edge load balancer on 2 servers in a high availability (HA) environment. I rarely do Edge, but the configuration is pretty straightforward. In my past projects, Edge has always been deployed on separate boxes, which is easier compared to a collocated setup. In this post I'm going to share my configuration for an Edge dispatcher (MAC forwarding) that resides together with the web server (I'm using IHS) and WebSphere. Each server uses 1 IP address for both the web server and the dispatcher. The configuration is almost the same, but there were a few issues that I encountered, and I hope this post will be of help to those dealing with Edge dispatcher as well.

For typical setup of Edge load balancer servers that do not reside in the same box with web servers, the general rules are:
– Primary Edge – cluster IP aliased to its NIC.
– Standby Edge – cluster IP aliased to its loopback.
– Web Servers – cluster IP aliased to loopback.

These rules hold the same in collocated environment:
– Primary Edge – cluster IP aliased to its NIC.
– Standby Edge – cluster IP aliased to its loopback.
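On HP-UX the aliasing itself might look something like the following sketch (the cluster IP and netmask are the example values used in this post; lan0 and the :1 alias slot are assumptions, so substitute your own device names):

```shell
# Primary Edge: alias the cluster IP to the NIC (lan0 is an assumed device)
ifconfig lan0:1 192.168.10.10 netmask 255.255.255.192 up

# Standby Edge / web servers: alias the cluster IP to the loopback instead,
# so the box accepts traffic for the cluster address without answering ARP
ifconfig lo0:1 192.168.10.10 netmask 255.255.255.192 up
```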

Collocated Edge.

Double collocated HA edge.

Say I have the following:
Cluster IP – 192.168.10.10
Cluster port – 8080
Primary Edge – 192.168.10.20
Backup Edge – 192.168.10.21


default.cfg for Primary Edge:

dscontrol set loglevel 5
dscontrol set logsize 50000000
dscontrol executor start

dscontrol executor set nfa 192.168.10.20

dscontrol highavailability heartbeat add 192.168.10.20 192.168.10.21
dscontrol highavailability backup add primary auto 8880
dscontrol highavailability reach add 192.168.10.55
dscontrol highavailability reach add 192.168.10.56

dscontrol cluster add 192.168.10.10
dscontrol port add 192.168.10.10:8080

dscontrol server add 192.168.10.10:8080:192.168.10.20
dscontrol server add 192.168.10.10:8080:192.168.10.21

dscontrol manager start manager.log 10004
dscontrol man reach set loglevel 5
dscontrol man reach set logsize 50000000
dscontrol advisor start Http 192.168.10.10:8080 Http_192.168.10.10_8080.log



default.cfg for Standby Edge:

dscontrol set loglevel 5
dscontrol set logsize 50000000

dscontrol executor start

dscontrol executor set nfa 192.168.10.21

dscontrol highavailability heartbeat add 192.168.10.21 192.168.10.20
dscontrol highavailability backup add backup auto 8880
dscontrol highavailability reach add 192.168.10.55
dscontrol highavailability reach add 192.168.10.56

dscontrol cluster add 192.168.10.10
dscontrol port add 192.168.10.10:8080

dscontrol server add 192.168.10.10:8080:192.168.10.21
dscontrol server add 192.168.10.10:8080:192.168.10.20

dscontrol manager start manager.log 10004
dscontrol man reach set loglevel 5
dscontrol man reach set logsize 50000000
dscontrol advisor start Http 192.168.10.10:8080 Http_192.168.10.10_8080.log



goActive script:

This script will remove the cluster IP from loopback and alias it to the NIC.

#!/bin/ksh

CLUSTER=192.168.10.10
LOOPBACK=lo0:1

ifconfig $LOOPBACK 0.0.0.0
dscontrol executor configure $CLUSTER



goStandby script:
This script will remove the cluster IP from NIC and alias it to the loopback.

#!/bin/ksh

LOOPBACK=lo0:1
CLUSTER=192.168.10.10
NETMASK=255.255.255.192

dscontrol executor unconfigure $CLUSTER
ifconfig $LOOPBACK $CLUSTER netmask $NETMASK up



goInOp script:
This script will remove the cluster IP from all devices (loopback and NIC).

#!/bin/ksh

CLUSTER=192.168.10.10
LOOPBACK=lo0:1
NETMASK=255.255.255.192

dscontrol executor unconfigure $CLUSTER
ifconfig $LOOPBACK $CLUSTER netmask $NETMASK down



The usual method to test whether high availability fails over smoothly is to unplug the network cable from the Edge server. I would tail root's mail (/var/mail/root) at the same time, so I could see which HA script was triggered when the network was interrupted. Another method is to bring the server down, by rebooting or shutting it down. With a reboot you'll only have a short time span to monitor the failover in action, though of course this depends on how long your servers take to start up.

But since this is a collocated environment, with either of the testing methods described I wouldn't be able to see whether the dispatcher balances requests to both web servers accordingly (in my case I'm using the round robin algorithm). So what I did was manually stop the executor so that failover occurs. Note that stopping dsserver alone won't trigger the HA scripts; in fact it's not necessary to stop dsserver at all. To be honest, even when it's not a collocated environment I normally test HA failover by stopping the executor, since am usually working remotely and unplugging the cable means getting the sysadmins to help. Might as well test if it's really working before going through all that hassle.
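So the failover test boiled down to something like this sketch on the active dispatcher:

```shell
# Stopping the executor triggers the HA scripts; goActive should fire on
# the standby node. Tail root's mail to confirm which script ran.
dscontrol executor stop
tail -f /var/mail/root
```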

One of the problems I encountered was instability. Sometimes the dispatchers would run in the right modes (active | standby), but most of the time both would run as active. It was very unstable, with no pattern I could track. Even worse, sometimes when I tried to run the dispatcher as a standalone lb, all incoming requests would be routed directly to the web server, skipping the dispatcher completely. I was stuck with this problem for several days before I finally figured out the culprit.

The ibmlb module.

Every time the executor is stopped, the ibmlb module is unloaded; every time the executor starts, the ibmlb module is loaded into the kernel. Luckily I have dmesg on both servers, and based on dmesg, this is how it should look whenever you stop and start the executor:

ibmlb DLKM successfully unloaded
ibmlb DLKM successfully loaded

But what happened was that when I stopped the executor, ibmlb was not unloaded. Its status was busy, and I had to unload the module explicitly.

ibmlb DLKM successfully unloaded
ibmlb DLKM successfully loaded
ibmlb version is 06.01.00.00 – 20060515-232359 [wsbld265]
WARNING: moduload : module is busy, module id = 14, name = ibmlb
WARNING: moduload : module is busy, module id = 14, name = ibmlb
WARNING: moduload : module is busy, module id = 14, name = ibmlb
WARNING: moduload : module is busy, module id = 14, name = ibmlb
WARNING: moduload : module is busy, module id = 14, name = ibmlb

I've never seen anything like this before (I used to configure dispatchers on AIX servers). Consider the following test cases (ARP table checked from a different server residing on the same segment):

TEST 1.

1) Primary active, Backup standby. Cluster IP belongs to Primary.
2) Primary down, Backup goes active. Module ibmlb is UNLOADED successfully on Primary. Cluster IP belongs to Backup.
3) Primary up in active mode, Backup goes standby. Cluster IP belongs to Primary.

TEST 2.
1) Primary active, Backup standby. Cluster IP belongs to Primary.
2) Primary down, Backup goes active. Module ibmlb is busy and still LOADED on Primary. Cluster IP belongs to Backup.
3) Primary up in active mode, Backup stays active. Cluster IP belongs to Primary, but all requests will skip dispatcher and go straight to the web server.


TEST 3.

1) Primary active, Backup standby. Cluster IP belongs to Primary.
2) Primary down, Backup goes active. Module ibmlb is UNLOADED successfully on Primary. Cluster IP belongs to Backup.
3) Primary up in active mode, Backup goes standby. Cluster IP belongs to Primary.
4) Backup down. Module ibmlb is UNLOADED successfully.
5) Backup up, running in standby mode.
6) Backup down. Module ibmlb is busy and still LOADED on backup.
7) Backup up, running in active mode (remember that Primary is also in active mode too). Cluster IP belongs to Backup, but all requests will skip the dispatcher and go straight to the web server.
8) Backup down. Module ibmlb is busy and still LOADED on Backup. Explicitly unload the module using the kcmodule command until it gets UNLOADED. Cluster IP belongs to Primary.
9) Backup up, running in standby mode.

Most of the time I wasn't able to unload it right away; I had to let the server 'rest' for about 15-20 minutes before trying to unload it again. Rebooting the server always solves this problem (the module's next state is unused). Am not sure if there's a way to force a module to unload though; as far as I know there's no force flag for kcmodule.
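For reference, the check-and-unload dance looked roughly like this sketch (kcmodule syntax as on HP-UX 11i; run as root):

```shell
kcmodule ibmlb          # show the module's current and next state
kcmodule ibmlb=unused   # request an unload; fails while the module is busy
```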

I was fooled several times when I tested the splash page of the web servers from my Opera browser. I was on a different subnet, so I guess there must have been a switch/router between me and the Edge servers. At times, even when the cluster IP was aliased to the Primary Edge, my browser would hit the Backup Edge because the ARP cache had not been refreshed. It was so annoying, since this affects the cluster report. The rest of the tests were done with a browser running on a different server on the same subnet; at least that way I could clear the ARP cache manually if I had to.
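On the test machine, clearing the stale entry by hand might look like this sketch (the IP is the example cluster address from above; run as root, and flag syntax varies slightly by OS):

```shell
arp -d 192.168.10.10           # drop the cached ARP entry for the cluster IP
arp -a | grep 192.168.10.10    # confirm which MAC answers on the next request
```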

Okay, this is probably a browser problem, but testing the splash page with Firefox sucks. It kept hitting the splash page even after I'd stopped both web servers and cleared the cache. It was alright with Opera though. What gives?

By the way, I'm using Edge v6.1. If you check out the Edge Fixpack page here, you'll notice that there is no patch for HP-UX. Not a single patch. Is IBM trying to say something? Don't use Edge on HP-UX, perhaps? Lol. Anyway, IBM packed me a patch (6.1.0.35), but it still didn't address the module issue. Am not sure I'd call it a patch though; it's more like an installer, since I had to reinstall everything.

Thanks to Robert Brown from IBM for assisting me on this ‘false alarm’ panic attack (initially I thought it was a network issue).

December 16, 2008

WebLogic 10.3 on FreeBSD?

Filed under: Tech — Tags: , , , — od @ 11:57 pm

I wish.. lol.

Am very new to weblogic, so am pretty excited to try it on my FreeBSD box at home. Well ok, I just managed to install it, but couldn't get it to run. Bleargh. The problem is with LD_LIBRARY_PATH, I think. There's no native directory under <WL_HOME>/server, so it complains about the missing path when you try to start the domain or the node manager. Pfftttt. I'd love to know how to fix this 😦

But anyways, here's how to install WebLogic 10.3 on FreeBSD 7. Just installing it; it still doesn't run, lol. I don't know why I even bother writing this.

1 - First of all, download the installer from the Oracle website. Choose HP-UX as the operating system, as that'll provide you with a generic jar installer.
http://www.oracle.com/technology/software/products/ias/htdocs/wls_main.html

2 – While it’s downloading, install eclipse from the port.
cd /usr/ports/java/eclipse && make install clean

3 – Install eclipse WTP. Get it from here:
http://www.eclipse.org/downloads/download.php?file=/webtools/downloads/drops/R2.0/R-2.0.3-20080710044639/wtp-R-2.0.3-20080710044639.zip

Place the zip file in /usr/local and unzip it there; it'll place all the extracted files in the right directory.

4 - After you're done with eclipse, you're ready to install weblogic. Am performing this as a non-root user.

I use diablo java, so running java -jar server103_generic.jar alone will not work.

My java -version:
java version "1.5.0"
Java(TM) 2 Runtime Environment, Standard Edition (build diablo-1.5.0-b01)
Java HotSpot(TM) 64-Bit Server VM (build diablo-1.5.0_07-b01, mixed mode)

So execute the installer with the Sun JDK binary to get it running:
/usr/local/jdk1.6.0/bin/java -Dos.name=unix -jar server103_generic.jar

You'll have to specify -Dos.name=unix, else you'll get an insufficient disk space error, which is very annoying when you actually have tons of free space. You may get lucky and the installation will end successfully. As for me, it got stuck at 74% while creating the sample domain.

$ /usr/local/jdk1.6.0/bin/java -Dos.name=unix -jar server103_generic.jar
Extracting 0%……………………………………………………………………………………….100%
Exception in thread "Thread-14" java.lang.OutOfMemoryError: Java heap space
at java.io.BufferedOutputStream.<init>(BufferedOutputStream.java:59)
at java.io.BufferedOutputStream.<init>(BufferedOutputStream.java:42)
at com.bea.plateng.common.util.JarHelper.extract(JarHelper.java:790)
at com.bea.plateng.common.util.JarHelper.extract(JarHelper.java:676)
at com.bea.plateng.common.util.JarHelper.extract(JarHelper.java:634)
at com.bea.plateng.domain.TemplateImporter.generate(TemplateImporter.java:237)
at com.bea.plateng.domain.script.ScriptExecutor$2.run(ScriptExecutor.java:2785)

The error was an out of memory exception, so I decided to reinstall with a bigger heap. But first I had to uninstall, since weblogic detected that it was already installed in the target directory. I then ran the installer with the following command:
/usr/local/jdk1.6.0/bin/java -Xmx2G -Dos.name=unix -jar server103_generic.jar

Installation complete!

Alright, now the installation is done, you might wanna try to start the sample domain.

Ahah! Now this is the part where I got unlucky.

It complains that the port is being used, when it’s not! Hmmmmphhhhh!

<Dec 13, 2008 12:25:25 PM MYT> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file /usr/local/jdk1.6.0/jre/lib/security/cacerts.>
<Dec 13, 2008 12:25:29 PM MYT> <Error> <Server> <BEA-002606> <Unable to create a server socket for listening on channel "MedRec Local Network Channel". The address 127.0.0.1 might be incorrect or another process is using port 7011: java.net.BindException: Can't assign requested address.>
<Dec 13, 2008 12:25:29 PM MYT> <Error> <Server> <BEA-002606> <Unable to create a server socket for listening on channel "Default". The address 192.168.0.1 might be incorrect or another process is using port 7011: java.net.BindException: Can't assign requested address.>
<Dec 13, 2008 12:25:29 PM MYT> <Error> <Server> <BEA-002606> <Unable to create a server socket for listening on channel "DefaultSecure". The address 192.168.0.1 might be incorrect or another process is using port 7012: java.net.BindException: Can't assign requested address.>
<Dec 13, 2008 12:25:29 PM MYT> <Emergency> <Security> <BEA-090087> <Server failed to bind to the configured Admin port. The port may already be used by another process.>
<Dec 13, 2008 12:25:29 PM MYT> <Critical> <WebLogicServer> <BEA-000362> <Server failed. Reason: Server failed to bind to any usable port. See preceeding log message for details.>
<Dec 13, 2008 12:25:29 PM MYT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FAILED>
<Dec 13, 2008 12:25:29 PM MYT> <Error> <WebLogicServer> <BEA-000383> <A critical service failed. The server will shut itself down>
<Dec 13, 2008 12:25:29 PM MYT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN>
Stopping PointBase server…
PointBase server stopped.

Well, actually there were other errors before that, but am too lazy to look at them. Lolz.

Now if you try to create a domain, you’ll get the shared library path error:

./config.sh: Don’t know how to set the shared library path for FreeBSD.
Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException
at java.awt.Container.createHierarchyEvents(Container.java:1366)
at java.awt.Container.createHierarchyEvents(Container.java:1366)
at java.awt.Container.createHierarchyEvents(Container.java:1366)
at java.awt.Container.createHierarchyEvents(Container.java:1366)
at java.awt.Container.addImpl(Container.java:1082)
at java.awt.Container.add(Container.java:903)
at com.bea.plateng.wizard.GUIContext$8.run(GUIContext.java:480)
at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:597)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:273)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:183)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:173)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:168)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:160)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:121)

Same goes if you try to start a node manager.

Ah well.. think I’ll spare some other time playing around with this. Doing weblogic makes me miss websphere. Lol.

September 23, 2008

DB2 migration across different platforms.

Filed under: Tech — Tags: , , — od @ 2:36 am

So, how do you migrate a database from one server to another across different platforms? There are a few articles on this topic on the net, and some of them are very good, but I still stumbled on some problems during the migration. This is not an expert howto; do note that I am next to clueless when it comes to databases. But yea, it's a howto so that I remember how I did it the last time, and hopefully it'll be of help to anyone who comes across this post.

Basically, to migrate a db between 2 servers running on different platforms, you'll need these 2 awesome utility commands:

DB2MOVE

Use db2move to export all tables and data in PC/IXF format.

The db2move command:
db2move <database-name> <action> [<option> <value>]

DB2LOOK

Use the db2look command to extract the DDL statements. What are DDL statements? They are the statements used to build and modify the structure of your tables and other objects in the database.

The db2look command:

db2look -d <database-name> [<-option1> <-option2> … <-optionx>]

STEP-BY-STEP example - based on a real scenario, with the problems encountered and how they were solved.

Scenario:

Migrating DB2 v8.2 ESE on Linux CentOS 5 to DB2 v9.5 on Windows 2003.

BEFORE MIGRATION STARTS

I have 5 databases located on linux running on DB2 ESE version 8.2. This is my first time doing DB2, my first time seeing the databases, so I did the following before I start migrating:

1) Do a full backup for all databases. This is so very important that I think it’s worth mentioning it another 3 times. In caps. – DO A FULL BACKUP FOR ALL DATABASES. DO A FULL BACKUP FOR ALL DATABASES. DO A FULL BACKUP FOR ALL DATABASES.
2) Record down the number of tables listed in each database.
3) Do a full backup for all databases.

MIGRATE

On Linux:

Export the data with the db2move command (no database connection needed). Run the command in a separate directory for each database, as it will create a number of IXF files, depending on how large your database is.

db2move db1 export
db2move db2 export
db2move db3 export
db2move db4 export
db2move db5 export

Generate the DDL statements with the db2look command (no database connection needed).

db2look -d db1 -e -a -td @ -l -o db1.sql
db2look -d db2 -e -a -td @ -l -o db2.sql
db2look -d db3 -e -a -td @ -l -o db3.sql
db2look -d db4 -e -a -td @ -l -o db4.sql
db2look -d db5 -e -a -td @ -l -o db5.sql

I didn't want to use the default delimiter, the semicolon (;), because am not sure whether there are any stored procedures or functions (am not even sure what those are) in the databases. So just to be on the safe side, I used '@' as the termination character instead.

So far, so good.

FTP the files over to the Windows server.

All of the *.ixf files – transfer them in binary mode.
db2move.lst – transfer them in ascii mode.
*.sql (generated by the db2look command) – transfer them in ascii mode.
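If you script the transfer, it might look something like this sketch (hostname, user and password are placeholders; adjust the file list to yours):

```shell
# Non-interactive ftp session: IXF dumps go over in binary mode,
# db2move.lst and the db2look scripts in ascii mode.
ftp -n winserver <<'EOF'
user db2admin secret
prompt
binary
mput *.ixf
ascii
put db2move.lst
put db1.sql
quit
EOF
```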

On Windows:

I already have DB2 ESE version 9.5 installed, with the DAS user and instance created (I prefer the names to match the db running on linux).

Create all the databases that I want to import in.

db2 create db db1
db2 create db db2
db2 create db db3
db2 create db db4
db2 create db db5

Run the script generated by db2look (no database connection needed).

db2 -td@ -vf db1.sql
db2 -td@ -vf db2.sql
db2 -td@ -vf db3.sql
db2 -td@ -vf db4.sql
db2 -td@ -vf db5.sql

Notice that I specified the -l option when running db2look, which means it generated the DDL statements for user-defined table spaces, database partition groups and buffer pools. Check the sql scripts and change the container paths to match the Windows environment before executing them. Something like:

/home/db2inst1/db2inst1/blah/path3/db2inst1_data.tbs’30000 to C:\db2inst1\blah\path3\db2inst1_data.tbs’30000

Else, you'll get a 'Bad container path' error.
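The path rewrite can also be scripted; a hypothetical sketch using the placeholder paths above (substitute your real container locations and file names):

```shell
# Rewrite the Linux container path in the db2look output to its Windows
# equivalent, writing a new script to run on the Windows side.
sed 's|/home/db2inst1/db2inst1/blah/path3/db2inst1_data.tbs|C:\\db2inst1\\blah\\path3\\db2inst1_data.tbs|g' \
    db1.sql > db1_win.sql
```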

I prefer to redirect the output to a file so that I can review it later. Most of the time I wasn't able to monitor the output, since some of the databases are pretty huge and I was working remotely over a lousy, lousy network connection (I love rdesktop for this).

By this time, my databases contain all the tables as the original databases on linux do. But of course, they’re all empty.

Normally, there shouldn’t be any problems until you come to the data loading part (no database connection needed).

db2move db1 load
db2move db2 load
db2move db3 load
db2move db4 load
db2move db5 load

The db2move utility also creates an output file named after the action you specified (in my case, LOAD.out), so I didn't have to bother redirecting the output to a file.

If this part ends successfully, you're all done. Unfortunately for me, there were warnings inside the LOAD.out files. I had 5 LOAD.out files altogether, and 4 of them contained the same warning code:

* LOAD: table "DB2INST1"."RQVIEWS"
*** WARNING 3107. Check message file tab52.msg!
*** SQL Warning! SQLCODE is 3107
*** SQL3107W There is at least one warning message in the message file.

So what’s in tab52.msg?

SQL3229W The field value in row "1" and column "9" is invalid. The row was
rejected. Reason code: "1".

SQL3185W The previous error occurred while processing data from row "1" of
the input file.

SQL3229W The field value in row "2" and column "9" is invalid. The row was
rejected. Reason code: "1".

SQL3185W The previous error occurred while processing data from row "2" of
the input file.

SQL3229W The field value in row "3" and column "9" is invalid. The row was
rejected. Reason code: "1".

SQL3185W The previous error occurred while processing data from row "3" of
the input file.

SQL3229W The field value in row "4" and column "9" is invalid. The row was
rejected. Reason code: "1".

A data type mismatch? To be frank, I don't know, but as I reviewed the db2move options, there was one that I had probably missed.

-l lobpaths

LOB stands for Large OBject. A large object (LOB) is a string data type with a size ranging from 0 bytes to 2 GB (GB equals 1 073 741 824 bytes).

So, if you know where your LOBs are, specify this option while exporting the data, and when you're done, check that you have files with names similar to this:

tab52a.001.lob

Being a complete noob in the world of db, I didn't know where the LOBs were. In fact, I didn't even know what a LOB was the first time I encountered it (no wonder I purposely ignored the -l option in the first place, lol). So I decided to export the db on linux once again, dumping it straight to Windows on the fly. This way, even without specifying the -l option, it exports your LOBs as well. Nice.

On Windows, I dropped all the databases that I’ve created since I prefer to have a fresh start. Now all I have to do is access the databases on linux remotely from my db2 on Windows.

db2 catalog tcpip node dbonlinux remote 10.8.8.230 server 50000

dbonlinux – an arbitrary name for the node I created.
10.8.8.230 – IP address of the linux(remote) server.
50000 – the TCP/IP port the remote DB2 instance listens on. This is the default port (listed under the service name iiimsf in /etc/services).

db2 catalog db db1 at node dbonlinux
db2 catalog db db2 at node dbonlinux
db2 catalog db db3 at node dbonlinux
db2 catalog db db4 at node dbonlinux
db2 catalog db db5 at node dbonlinux

db2 terminate
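If you want to double-check the entries before connecting, DB2 can list what has been cataloged (again, this needs a DB2 client, so it is shown for reference only):

```shell
# List the cataloged nodes and databases to verify the entries above.
db2 list node directory
db2 list db directory
```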

Now I can connect to my linux db remotely from the Windows server by using this command:

db2 connect to db1 user db_username using db_password
db2 connect to db2 user db_username using db_password
db2 connect to db3 user db_username using db_password
db2 connect to db4 user db_username using db_password
db2 connect to db5 user db_username using db_password

If you fail to connect, check that you’re using the correct port.

To find which port is used on the server you wish to access, check the SVCENAME setting in the database manager configuration:

db2 get dbm cfg | grep SVCENAME

Most of the time it returns a service name instead of a port number, so look up the port for that service name in the /etc/services file.
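Resolving the service name to a port is a one-liner. The sketch below uses a sample file with a typical entry (the service name db2c_db2inst1 is an example); on a real server you would point the awk at /etc/services itself:

```shell
# Sample /etc/services entry; on a real server, use /etc/services directly.
printf 'db2c_db2inst1 50000/tcp # DB2 connection service\n' > /tmp/services.sample

# Print the port number registered for the given service name; prints: 50000
awk '$1 == "db2c_db2inst1" { split($2, p, "/"); print p[1] }' /tmp/services.sample
```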

Now that I was successfully connected, I ran the db2move command again.

db2move db1 export

And I did the same with the remaining 4 databases. This time when I checked, the LOBs were exported as well. Coolness.

Remember to disconnect from the database that you’ve accessed remotely; you wouldn’t want to mess with the production database. As for me, I wouldn’t need to access the remote database again, so I removed the database alias and the node I’d created.

db2 uncatalog db db1
db2 uncatalog db db2
db2 uncatalog db db3
db2 uncatalog db db4
db2 uncatalog db db5
db2 uncatalog node dbonlinux

Create all 5 databases again with the db2 create db <database_name> command.

I ran the sql script generated by db2look again, loaded the data using the db2move command, and that’s it, I’m done.

But I wasn’t so lucky. Only 3 out of the 5 databases were migrated successfully without any errors. To be honest, I was pretty devastated at this point.

Further checking revealed that during the execution of the sql script generated by db2look, the table spaces were not created because of a bad container path. I was completely dumbfounded because the container path was good, seriously. Aargghhhhhhhhhhhhhhhhhh! I decided to proceed without the table spaces and create them manually afterwards.

All this while I had been running db2move in load mode. With db2move <db_name> load, you have to create the tables in the database first, or you’ll receive tons of errors. With import, you don’t. So, for the databases that I had failed to load the data into, I did an import instead. Again, I dropped the databases and recreated them for a clean start.

db2move db1 import
db2move db2 import

Success. Cool.

Now that the tables were all imported, I created the necessary table spaces manually, matching the names listed in the sql script generated by db2look.

Run the sql script generated by db2look.

I’m DONE!

And that’s what I thought. Bleargh.

Well ok, I’m 95% done, with all the exporting and loading, which is the crucial part anyway.

VERIFYING INTEGRITY

The final part is to check the integrity of the migrated database.

When I first ran select * from <table_name>, I encountered this error:

SQL0668N Operation not allowed for reason code "1" on table blah.db1. SQLSTATE=57016

More info at https://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.db2.luw.messages.sql.doc/doc/msql00668n.html

Run the following command and all’s good:

db2 set integrity for <table_name> immediate checked

To check which tables are in check pending state, run the following command:

db2 "select tabname from syscat.tables where status = 'C'"

The output is a list of tables that require the set integrity statement to be executed. It would be lovely to have a script or a single command that sets the integrity on all the affected tables, rather than doing it one by one for each table.
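Such a script is straightforward to sketch in shell. The gen_set_integrity helper below is hypothetical (not a DB2 utility): it turns a list of table names, such as the output of the syscat.tables query, into SET INTEGRITY statements that can be fed back into db2. The demonstration at the end uses made-up table names:

```shell
# Hypothetical helper: turn table names (one per line on stdin) into
# SET INTEGRITY statements.
gen_set_integrity() {
    while read tab; do
        [ -n "$tab" ] && printf 'SET INTEGRITY FOR %s IMMEDIATE CHECKED;\n' "$tab"
    done
}

# In practice (requires a DB2 client, so it is not run here):
#   db2 -x "select tabname from syscat.tables where status = 'C'" | gen_set_integrity > setint.sql
#   db2 -tvf setint.sql
# Demonstration with made-up table names:
printf 'RQVIEWS\nORDERS\n' | gen_set_integrity
```

The demonstration prints one SET INTEGRITY statement per input table name.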

Yea, I’m DONE 😀

Hope I didn’t miss anything.
Recommended readings:

Using DB2 utilities to clone databases across different platforms

DB2 Version 8 Connectivity Cheat Sheet

DB2 Backup Basics

DB2 Backup Basics – Part 2

DB2 Backup Basics – Part 3

March 20, 2008

osx86 on vmware server.

Filed under: Tech — Tags: , — od @ 6:53 pm

My 3rd attempt at installing osx86 onto my vmware server ended successfully. I know there are a number of guides/howtos on this topic, but I think I’ll just post the settings that I used to get it to work.

Machine: DELL Inspiron 640m
Platform: Ubuntu Gutsy
Console: VMWare Server 1.0.4
osx86 version: Jas 10.4.8 AMD Intel SSE2 SSE3
References: PCWiz Computer and AsenDURE

Note: This is just for fun/test purposes.

Virtual Machine Configuration: Custom
Guest Operating System: Other – FreeBSD
No. of processors: One
Access Rights: Make this virtual machine private – checked
Memory: 512MB
I/O Adapter Types: LSI Logic
Disk: Create a new virtual disk
Virtual Disk Type: IDE
Network Connection: Use host-only networking

My vmx file:

##########################################################

config.version = "8"
virtualHW.version = "4"
scsi0.present = "FALSE"
scsi0.virtualDev = "lsilogic"
memsize = "512"
ide0:0.present = "TRUE"
ide0:0.fileName = "OSX.vmdk"
ide0:0.writeThrough = "TRUE"
ide1:0.present = "TRUE"
ide1:0.fileName = "/dev/scd0"
ide1:0.deviceType = "cdrom-raw"
floppy0.startConnected = "FALSE"
floppy0.fileName = "/dev/fd0"
Ethernet0.present = "TRUE"
Ethernet0.connectionType = "hostonly"
Ethernet0.virtualDev = "vlance"
displayName = "OSX"
guestOS = "freebsd"
priority.grabbed = "normal"
priority.ungrabbed = "normal"
powerType.powerOff = "hard"
powerType.powerOn = "hard"
powerType.suspend = "hard"
powerType.reset = "hard"
paevm = "true"

floppy0.present = "FALSE"

##########################################################

Previously, on my 2nd attempt, I mounted the osx86 iso file using the mount command and pointed the CD-ROM to the mount point, but it just didn’t work. So I skipped the hassle and burned it straight onto a DVD.

With the above settings I didn’t have any problem booting the DVD. I did configure the VM BIOS though, just to improve performance.

Boot the CD, and the launcher will start. After creating a partition using the Disk Utility, I relaunched the installer: Installer > Restart.

Upon rebooting, you may or may not get a 'b0 error', depending on your boot sequence setting. To solve it, go into the BIOS and set your CD-ROM as the first boot device. Press F10 to save, and you’ll get back to the launcher again. This time, you can proceed with the installation. At the end of the installation, the vm will restart. If all goes well, Mac OS X should boot perfectly.

Done 😀

osx86.

March 14, 2008

Install WAS Base/ND v6.1.0 on Ubuntu Gutsy.

Filed under: Tech — Tags: , , , — od @ 2:07 pm

There are 2 things that need to be configured in order to install WebSphere Application Server Base/ND on Ubuntu Gutsy successfully – tested using WAS Base/ND v6.1.

1 – Ubuntu Gutsy links sh to dash instead of bash. There won’t be any error during the installation of WAS itself, but you will not be able to create any profiles, which makes the installation useless. There are two ways to fix this: either remove the symlink and relink sh to bash, or change the shebang line inside the WAS install script from #!/bin/sh to #!/bin/bash. Changing the default shell from dash to bash may make your system slightly slower, since dash is lighter than bash, but I think the difference is hardly noticeable. More info at https://wiki.ubuntu.com/DashAsBinSh.
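The shebang fix can be scripted with sed. The file below is a stand-in for the actual WAS install script, which I am not naming here, so adapt the path to whatever the install media ships:

```shell
# Create a stand-in for the WAS install script (in reality you would edit
# the real script from the install media).
printf '#!/bin/sh\necho installing\n' > /tmp/install.sample.sh

# Rewrite the shebang from /bin/sh to /bin/bash on line 1 only.
sed -i '1s|^#!/bin/sh$|#!/bin/bash|' /tmp/install.sample.sh

# Confirm the change; prints: #!/bin/bash
head -n 1 /tmp/install.sample.sh

# The alternative (system-wide) fix is relinking sh itself:
#   sudo ln -sf bash /bin/sh
```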

2 – This applies to WAS ND; I didn’t encounter any issues with Base. If you’re having a problem getting the dmgr server up, and the error in SystemOut.log is something like this:

[3/12/08 15:38:06:539 MYT] 0000000a LogAdapter E DCSV9403E: Received an illegal configuration argument. Parameter
MulticastInterface, value: 127.0.1.1. Exception is java.lang.Exception: Network Interface 127.0.1.1 was not found in
local machine network interface list. Make sure that the NetworkInterface property is properly configured!
at com.ibm.rmm.mtl.transmitter.Config.<init>(Config.java:238)
at com.ibm.rmm.mtl.transmitter.MTransmitter.<init>(MTransmitter.java:192)
at com.ibm.rmm.mtl.transmitter.MTransmitter.getInstance(MTransmitter.java:406)
at com.ibm.rmm.mtl.transmitter.MTransmitter.getInstance(MTransmitter.java:345)
at com.ibm.htmt.rmm.RMM.getInstance(RMM.java:128)
at com.ibm.htmt.rmm.RMM.getInstance(RMM.java:189)
at com.ibm.ws.dcs.vri.transportAdapter.rmmImpl.rmmAdapter.RmmAdapter.<init>(RmmAdapter.java:218)
at com.ibm.ws.dcs.vri.transportAdapter.rmmImpl.rmmAdapter.MbuRmmAdapter.<init>(MbuRmmAdapter.java:76)
at com.ibm.ws.dcs.vri.transportAdapter.rmmImpl.rmmAdapter.RmmAdapter.getInstance(RmmAdapter.java:133)
at com.ibm.ws.dcs.vri.transportAdapter.TransportAdapter.getInstance(TransportAdapter.java:161)
at com.ibm.ws.dcs.vri.common.impl.DCSCoreStackImpl.<init>(DCSCoreStackImpl.java:178)
at com.ibm.ws.dcs.vri.common.impl.DCSCoreStackImpl.getInstance(DCSCoreStackImpl.java:167)
at com.ibm.ws.dcs.vri.common.impl.DCSStackFactory.getCoreStack(DCSStackFactory.java:92)
at com.ibm.ws.dcs.vri.DCSImpl.getCoreStack(DCSImpl.java:84)
at com.ibm.ws.hamanager.coordinator.impl.DCSPluginImpl.<init>(DCSPluginImpl.java:238)
at com.ibm.ws.hamanager.coordinator.impl.CoordinatorImpl.<init>(CoordinatorImpl.java:322)
at com.ibm.ws.hamanager.coordinator.corestack.CoreStackFactoryImpl.createDefaultCoreStack(CoreStackFactoryImpl
.java:82)

Chances are you have not assigned an IP address to your hostname other than the default 127.* address. If this is the case, you won’t be able to federate nodes to the Dmgr either. So edit your hosts file. Since Edgy, Ubuntu has assigned the hostname to 127.0.1.1, so you will see 127.0.0.1 assigned to localhost, and 127.0.1.1 to your hostname. Assign your hostname to 127.0.0.1 as well, and the problem is solved. But if you plan to do node federation, assign a real IP address to your hostname. Your hosts file should look something like this:

127.0.0.1 localhost YourHostName
127.0.1.1 YourHostName

Done.