Monday, December 31, 2012

Pro Oracle Database 11g RAC on Linux

This is not intended as a complete book review, but a warm recommendation of a book I've found very useful over the last couple of months. In short: highly recommended if you are working with RAC or about to start a RAC project soon.
I have been working on and off with RAC for several years, but I don't take any new RAC project lightly. Starting in November I was involved in a proof of concept for a large public customer. Before and during this project I relied heavily on this book: Pro Oracle Database 11g RAC on Linux, written by Steve Shaw and Martin Bach.

True, installation of RAC has become much easier in 11gR2, but the planning part is as important as ever. If you jump into the project without proper planning, you are likely to encounter problems later on, after the system is installed and running (more or less). Changing the network layout, or almost anything else, is a hassle later on, though this too has become easier with the functionality added to the crsctl and srvctl commands in 11gR2.

One reason I like this book as an important addition to the Oracle documentation is that it is written very logically from a project point of view. It starts out with a good introduction and then explains the important concepts and the architecture. This is stuff you need to get right before you start, and there are many decisions you have to make up front. The book builds a solid ground under your cluster with a good focus on the OS (Linux in this case, but I would recommend it for a similar project on Windows as well, due to the book's structure). In other words, if you don't have time to read the necessary chapters in this book, you will not have time to fix the errors later on either.

We're not exactly masters of logistics in this country, and I came in late for this project. I started my preparation by reading this book; when I had read enough and felt prepared for the next day, I could put the book aside and get some rest. The next day I would continue reading and planning. That might read as if this was my first RAC project; it was not, but I don't pretend to remember every detail, so the book served as a checklist.

The project gave us several challenges: a rather big database that we created from an unconventional RMAN backup, Real Application Testing (RAT), and super fast hardware that still needed tweaking at the OS level. But not once did I run into problems due to wrong configuration of RAC or errors we could blame on the RAC software.

Have a nice 2013! Go to conferences and user group meetings, and meet all the nice people in the community. Our conference, OUGN 2013, will be better than ever; we will have Oracle experts from Australia, the USA, and Europe.

Friday, September 28, 2012

Oracle Open World - lucky me

Last year was my first time at OOW. I made a write-up every day on my blog, but this year I will maximize my schedule, meaning there will be little time for blogging while I'm there. It is now only a few hours before I start on my trip OSL-SFO. Writing about what I plan to do seems like a great way to kill time. Also, I'm doing some proactive jet-lag prevention research; maybe staying up late will make the transition from CET to PT easier (or is it CEST and PST now?).

I have a schedule packed with interesting stuff, but based on experience from last year, I expect to feel some conference fatigue setting in around noon on Wednesday. I may just as well see Mogens in his office then.

Besides all these good presos, I am really looking forward to two events. One is Oak Table World (formerly called Oracle Closed World; guess what happened). Good thing it is early in the week, running Monday and Tuesday. Those guys are not giving 101-level courses, and they will demand my attention.

The second event is the Bloggers Meetup organized and held by Pythian. This may very well be the best networking opportunity at OOW :-) Last year I met a bunch of people I have known virtually through Twitter and their blogs for years. Nothing beats a conversation in person. You will be surprised how many nice people there are out there that share our somewhat narrow interest.

I'm always interested in talking to potential speakers for our annual conference. If you have any questions regarding our user group (Oracle User Group Norway) or our conference, please get in touch. Btw, here is our call for papers.

Paying slight attention to Twitter, I just noticed a link to the kick-off for the IOUG Big Data SIG. I think I want to check it out too. Not sure what to make of this Big Data thing, but it seems to attract a lot of smart brains.

My luggage is ready, and it includes a cool t-shirt from Method R; Oak Table World will be a nice occasion to wear it.

Saturday, September 15, 2012

VirtualBox 4.2 released

Version 4.2 of VirtualBox was released two days ago (so far for Linux only). Download it from here:

Probably not important, but I'm using Fedora 16 at the moment, and since I was running VirtualBox 4.1 I had to remove the previous package first and then install the 4.2 version. This does not destroy the guest VMs.

rpm -e VirtualBox-4.1
rpm -ivh VirtualBox-4.2-4.2.0_80737_fedora16-1.x86_64.rpm

After restarting the VirtualBox Manager, the old VM guests are still in place.

Easier installation of Oracle on Linux

This is more a note to myself... Installing Oracle on Linux has become much easier with a package that prepares the OS before installation of the Oracle server software. In previous versions the rpm package was called oracle-validated, but for 11gR2 on OEL6 it is called oracle-rdbms-server-11gR2-preinstall.

Probably the easiest way to install the Oracle database on Linux is to install Oracle Enterprise Linux 6 and, before installing Oracle, do two things:

1. Enable the public yum repository at Oracle:
cd /etc/yum.repos.d
Download the repo file from Oracle into this directory, then edit it and enable the relevant version by setting enabled=1.

2. Install the package that prepares the server for Oracle database installation:
yum install oracle-rdbms-server-11gR2-preinstall
There is no need to run any other commands after the yum install. Continue with runInstaller.

Update 2013-07-12:
With the release of database 12c, Oracle has released a new package to prepare Oracle Linux 6 for it.

Get the info from Oracle here. The procedure remains the same as for 11g; only the name of the rpm file changes. I installed it on a fresh new OL6, without any extra yum repository configuration, with:
yum update
yum install oracle-rdbms-server-12cR1-preinstall

The outline from Oracle tells you to do step 1 above, but the repo file was already included in OL 6.4.

Monday, August 20, 2012

Oracle load testing - part 3 Hammerora results

I learned something very important when doing testing with Hammerora. The documentation is quite good and makes a simple but important point: the importance of planning and preparation. To quote the documentation for Oracle OLTP testing:

Planning and Preparation is one of the most crucial stages of successful testing but is often overlooked. Firstly you should fully document the configuration of your entire load testing environment including details such as hardware, operating system versions and settings and Oracle version and parameters. Once you have fully documented your configuration you should ensure that the configuration is not changed for an entire series of measured tests. This takes discipline but is an essential component of conducting accurate and measured tests.
From this I conclude that many tests I've seen, and done myself, have not been accurate. As stated earlier, the goal of this testing was simply to compare performance before and after the migration from EVA to 3PAR.

Since this customer did not have the required license to run AWR, I made the following change in the driver script to create a Statspack snapshot in place of an AWR snapshot. Search for the line containing dbms_workload and replace
set sql1 "BEGIN dbms_workload_repository.create_snapshot(); END;"
with
set sql1 "BEGIN perfstat.statspack.snap; END;"

When testing with Hammerora I decided to run each test three times to see if the numbers were consistent. I recorded the numbers of each run in a spreadsheet, as shown in the following table for the tests on EVA:

Vusers  Run  Report   tpm    nopm   Avg_tpm
1       1    1_2      7158   2369   7634
1       2    11_12    7874   2645
1       3    21_22    7868   2804
3       1    31_32    16478  5765   17317
3       2    41_42    17678  6256
3       3    43_44    17794  6130
10      1    45_46    27847  9959   33225
10      2    51_61    32581  11600
10      3    71_81    39248  13701
20      1    91_101   47489  17441  47075
20      2    111_121  63062  22658
20      3    131_141  30674  11116
30      1    151_161  54349  19756  44186
30      2    171_181  45628  17331
30      3    191_201  32581  12733

Vusers is the number of virtual users in Hammerora; Run is 1-3 for each setting of Vusers; Report refers to the Statspack report created on the snapshots before and after; tpm and nopm are as reported by Hammerora; and finally Avg_tpm is the average within each group. Compare this to the numbers for the 3PAR:

Vusers  Run  Report  tpm    nopm   Avg_tpm
1       1    9_10    8246   2815   8262
1       2    11_12   7983   2717
1       3    13_14   8556   2956
3       1    15_16   22652  7854   22881
3       2    17_18   22652  7831
3       3    19_20   23339  7994
10      1    21_22   33539  11767  33191
10      2    25_26   39054  13729
10      3    27_28   26981  9428
20      1    29_30   47134  16462  47356
20      2    31_32   46436  16330
20      3    33_34   48497  17023
30      1    35_36   53197  18902  50788
30      2    37_38   44980  15994
30      3    39_40   54187  19033

The repeated tests for the same number of virtual users vary much less on the 3PAR than on the EVA. Also, the numbers for the EVA seemed to improve with each run, maybe due to some caching taking place.
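As a quick sanity check, the Avg_tpm column and the run-to-run spread can be recomputed from the tables above. Here is a minimal Python sketch of my own; the numbers are copied from the tables, the code itself is just an illustration, not part of the original testing:

```python
from statistics import mean, pstdev

# tpm per (SAN, Vusers) group, three runs each, copied from the tables above
runs = {
    ("EVA", 20):  [47489, 63062, 30674],
    ("3PAR", 20): [47134, 46436, 48497],
    ("3PAR", 3):  [22652, 22652, 23339],
}

for (san, vusers), tpm in runs.items():
    avg = mean(tpm)
    cv = pstdev(tpm) / avg   # coefficient of variation: spread relative to the average
    print(f"{san} {vusers} vusers: avg_tpm={round(avg)} cv={cv:.2f}")
# EVA at 20 vusers shows a far larger relative spread (cv ~0.28)
# than the 3PAR (cv ~0.02), which is the stability claim in numbers.
```

The recomputed averages match the Avg_tpm column, and the coefficient of variation puts a single number on "varies much less".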

The 3PAR also seemed more reliable for a given number of virtual users, as can be seen in these screen captures; the first is for 20 virtual users on EVA:

The performance on the 3PAR does not change much during the test (20 virtual users):

You'll see that in one instance the EVA seems to perform better, but I would rather have the stable, less erratic performance of the 3PAR than a system with occasional good performance.

All in all, it was very easy to play around with Hammerora; it is quick to set up, so you can spend your time planning and executing the tests. I also like how you can observe the change in performance over time. Clearly Hammerora is a tool I will use more in the future.

Wednesday, June 6, 2012

Oracle load testing - part 2 ORION results

Last week I was running load tests in between other tasks on my agenda. As stated in the previous post, I wanted to compare performance, as measured by load-testing tools, on an old SAN and a new one: EVA and 3PAR, respectively. I ran two types of tests, the OLTP test and the SIMPLE test:
orion -run oltp -num_disks 50
orion -run simple -num_disks 50

Following are charts from the ORION OLTP test with "num_disks" equal to 50.

The test ran against one disk (LUN) per execution. Only the test server had been migrated to 3PAR, and ORION was run before and after. Not surprisingly, the 3PAR achieves a higher number of I/Os per second as the load increases. The EVA seems to behave erratically, or break down, after some point. This is confirmed in the following chart for latency:

The wild values from one of the EVA tests throw the chart out of proportion, but it is still possible to see that the latency is within reasonable limits. By limiting the range to 0-50,000 microseconds the chart looks like this:

It looks like I could have expanded the range when testing the 3PAR. Latency is within 10-32 ms over the whole range. Latency is the most important measure for databases IMHO, and after testing this repeatedly on the 3PAR I expect more stable performance after the migration.

Next chart is a histogram for the latency values:

The main difference between them is the outliers on the right side for the old EVA.

The following charts are from the SIMPLE test. In this test the pattern is less erratic for the EVA, but the 3PAR still performs much better and seems quite happy as the load increases.

The SIMPLE test measures throughput in MBPS. Here, large I/Os are used to simulate a situation where throughput is the main concern. The pattern resembles that of the previous picture, and the throughput of the 3PAR is amazingly stable. I'm wondering if it will be just as stable when we move production next week.

The last chart shows the latency of the SIMPLE test; small I/Os were used:

In summary: the ORION tests indicate that the 3PAR is much more stable and, judging by the latency results, more fit for serving a database than the EVA. The pattern on the EVA was similar on both the production and test servers, which means that the extra load from other users in production does not explain the oscillations.

The next post, on Hammerora, will show a similar pattern. I learned a lot by reading the Hammerora documentation; it is very educational, giving good advice on how to plan and run tests. Afterward my ORION testing felt a bit inferior compared to the rounds with Hammerora, but I still think the results from ORION are interesting. More on that in a few days.

Monday, May 28, 2012

Oracle load testing - part 1

The current customer is migrating from EVA storage to 3PAR, and inspired by some discussions on Twitter between @yvelikanov, @martinberx, @kevinclosson, @martinDBA, and others, I decided to try out some free tools for load testing of Oracle servers. I can hardly tune a SQL statement without getting philosophical about it, so this first part in particular is a bunch of thoughts I had when starting out.

There is a lot of discussion about the usability of such tools; is the measured performance anywhere close to reality? The load these tools generate is more or less synthetic. One tool that stands out from the crowd is Real Application Testing (RAT), but RAT requires an extra license from Oracle. With RAT you can capture real load on a production system and replay it on a test system with new or upgraded hardware or configuration. The reports generated in Enterprise Manager let you compare the performance of the old and new systems so clearly that even your CEO can understand them. You cannot get much closer to reality than with RAT, but as I said, it costs money and time to set it up. [And for a change I will not whine much more about the fact that SMBs around here do not invest in these licensed packages that make the life of the DBA easier.]

Back to the question of the reliability of these tools: the fact that the load is often quite artificial and likely to be very homogeneous (or randomized without the skew you have in the real world) means they give an upper limit you are unlikely to reach in normal production. That was one of the reasons Kevin Closson wrote the Silly Little Oracle Benchmark (SLOB). After having played with this, I have reached the preliminary conclusion that these tools are more useful for comparing configurations and behavior than for ranking your system.

Over the last few days my mind has started to compare this with the infamous buffer cache hit ratio (BCHR); see the post on the subject on Jonathan Lewis' blog. Though the BCHR is a fairly meaningless measurement, you may still know what it used to be for your most important databases, since several monitoring systems out there still come with an Oracle module that measures it and sends you an alert if it is too low (too low according to the assumptions Lewis and others attacked a long time ago). But what if your BCHR suddenly changes? What does that mean? It means just that: something changed. And that may be worth investigating. Think of a kid who always makes some noise; though silence is welcome, if he suddenly stops his rattling you want to know why, because silence is not always a good sign. Translated back to the BCHR, an increased ratio could be caused by some sessions reading indexes en masse when a full table scan (FTS) would have performed better.

And what is the connection to load testing? By repeating the tests several times and seeing the same pattern again and again, the tool confirmed itself; when running it against another system the pattern changed, but remained consistent on the new system. The measured limit itself is probably off from reality, but I think it is reasonable to assume that when the performance becomes erratic and unstable (that is, widely varying results that are not linearly or exponentially linked to the increasing load, or, less fancy, the graph you are looking at goes up and down on the right side), you have found some upper limit of the load the SAN can take. If the performance on the new SAN remains stable under more load than the old one, it will probably perform better in real-world production as well; just don't believe you have found the correct upper limit for your application. That was pretty much our ambition: compare the old and new SAN, and possibly discover any obvious configuration errors.
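That hand-wavy definition of "erratic" can be sketched in code. The following is a toy illustration of my own (the function name and the 5% tolerance are arbitrary choices, not part of any tool), using the Avg_tpm numbers from the Hammerora tables above: flag the first load level where throughput drops noticeably below the best value seen so far.

```python
def saturation_point(loads, throughput, drop_tolerance=0.05):
    """Return the first load level where throughput falls more than
    drop_tolerance below the best value seen so far, or None if it
    keeps scaling over the whole range."""
    best = float("-inf")
    for load, tp in zip(loads, throughput):
        if tp < best * (1 - drop_tolerance):
            return load
        best = max(best, tp)
    return None

# Avg_tpm per Vusers, copied from the Hammerora tables above
eva  = saturation_point([1, 3, 10, 20, 30], [7634, 17317, 33225, 47075, 44186])
par3 = saturation_point([1, 3, 10, 20, 30], [8262, 22881, 33191, 47356, 50788])
print(eva, par3)   # prints: 30 None
```

On these numbers the EVA degrades at 30 virtual users while the 3PAR keeps scaling, which matches the charts.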

In my case I wanted to do some measurements of I/O performance so I could compare the old EVA with the new 3PAR SAN. First out was ORION (Oracle IO Numbers). ORION is included with the database software; you'll find it under $ORACLE_HOME/bin. Or you can download it from the page I just linked to. The general idea of ORION is that it executes more or less the same code path as the Oracle database does when doing I/O. There is no application testing involved, just a bunch of I/O operations thrown against your storage. You can test small and large I/Os to simulate the load from OLTP, DSS, or mixed environments. ORION is supported on several platforms, including Windows. ORION reports latency (in microseconds), throughput (megabytes per second), and I/O operations per second (IOPS). The results are written to text files in CSV format that can easily be imported into Excel for quick visualizations (or, much better, imported into your precious database and devoured with APEX).
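If Excel is not your thing, pulling the CSV output into a script works too. The sketch below is only an illustration: the column layout and numbers shown are stand-ins I invented, not the exact format ORION writes, so check the actual result files before adapting it.

```python
import csv
import io

# Stand-in for one ORION result file: first row holds the load levels
# (outstanding I/Os), second row the measured latency in ms.
# Both the layout and the numbers are invented for this illustration.
sample = """\
Latency,1,2,4,8
0,12.1,14.0,18.5,31.9
"""

rows = list(csv.reader(io.StringIO(sample)))
loads = [int(c) for c in rows[0][1:]]
latency = [float(c) for c in rows[1][1:]]
for load, lat in zip(loads, latency):
    print(f"{load:>3} outstanding I/Os: {lat:5.1f} ms")
print("worst latency:", max(latency), "ms")
```

For a real run, replace the embedded sample with `open()` on the CSV file ORION produced and keep the same parsing.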

I tried to get SLOB to work on Windows, but couldn't find an easy way to replace the semaphore mechanism Closson used on Unix to start all sessions at once. I could have written something in Java or similar, but I really didn't have the time, and it seemed like overkill.
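For the record, a portable stand-in for that semaphore gate could look like the sketch below, using a threading.Barrier so every session starts its workload at the same moment. This is my own substitute for illustration, not how SLOB actually does it.

```python
import threading
import time

NUM_SESSIONS = 5
barrier = threading.Barrier(NUM_SESSIONS)
started_at = []

def session(n):
    # ...connect to the database and prepare the workload here...
    barrier.wait()                       # block until every session is ready
    started_at.append(time.monotonic())  # list.append is atomic in CPython
    # ...run the timed workload loop here...

threads = [threading.Thread(target=session, args=(i,)) for i in range(NUM_SESSIONS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# all sessions should be released within a fraction of a second of each other
print(max(started_at) - min(started_at) < 0.5)
```

The barrier releases all threads at once, so the measured part of each session starts simultaneously instead of staggered by connect time.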

Next out was Hammerora. It uses Tcl/Tk and is also supported on various platforms, but it was very unstable on the Windows 7 32-bit machine I was using. It would run for a while and then suddenly, at a random point (sometimes after many minutes of testing), hit a Tcl/Tk error that required a restart. I think this was caused by the combination of the OS and Tcl/Tk.

Really wanting to test both SLOB and Hammerora, I installed VirtualBox on the PC and started with Hammerora there. Graphs and numbers are coming this week, but in general the result from last week was that ORION indicated stable and consistently better performance on the 3PAR, with a less erratic pattern. Hammerora actually reported better results from VirtualBox than when running directly on the host OS (Windows 7). Depending on how much idle time I have in the coming weeks, I'll try out SLOB, Swingbench, and more to compare the two SANs.

Thursday, April 5, 2012

Analyze your social networks with ThinkUp

... on a server in the sky

I had two things on my agenda: to test out HP Cloud and ThinkUp. HP Cloud is still in private beta, but I was lucky to get an invitation after registering. They aim to compete with Amazon Web Services (AWS), and so far I think they have a good chance. It is rather easy to order and set up a new server in the cloud, with a simple interface and few buttons. It is tempting to include a few screenshots here, but I'm not sure the agreement I just signed allows it. If they expand the interface with more features, I hope they keep this simple version for impatient people who just need a server right now. I've had great fun with AWS too, but it is crowded with features and things to figure out. For my need, testing a web application, HP Cloud seems just perfect.

ThinkUp is a tool that analyzes how you stand in your different social networks. It also has the added benefit that it actually backs up all your posts to your own private spot. Just go to their web site and check out the features. It does require a web server with PHP 5.3 and MySQL; a LAMP server, in other words.

In short this is what you have to do:

  • Create a compute server with CentOS 6
  • Create the ssh-keys and test connection with ssh
  • Install packages for MySQL, Apache and PHP
  • Configure database, web server and php
  • Download ThinkUp and unpack it in the webserver's public html directory
  • Access ThinkUp and start setting up connections to your social networks
  • Enjoy the reports on your digital social life.

I created the smallest version of a compute server (standard.xsmall) at HP Cloud, with CentOS 6.2 Server as the image. It comes with 1GB of memory and two partitions of 10GB and 30GB disk space. When the server reaches the status 'running', CentOS is already installed with a root partition of 10GB. You don't need more to run ThinkUp, but you have another 30GB partition available if you want to format and mount it. The first time, you need to generate a key pair to be used with SSH when you want to connect to the server. This is the same mechanism that AWS uses, and it works perfectly on Unix/Linux and on Windows. Once generated, you store the private key in a text file, not to be lost unless you want to scratch the servers that depend on it. Since I also have a Windows laptop, I had to make a version of this key that works with PuTTY. The tool PuTTYgen can be downloaded from the same place as PuTTY. To convert to a key that PuTTY can use, select Conversions -> Import key, select the file with the key from HP Cloud, and save the converted private key in a new file (do not overwrite the original key). I used SSH-2 RSA, which seems to be the default.

Before I could connect to the server I had to attach a public IP. This is of course done from the management console.
To connect with PuTTY I entered the assigned public IP address and loaded the key file under Connection -> SSH -> Auth. Log in as root, and the connection is established without a password since you are using the private ssh key.

From my MacBook I connected with:

ssh -i original_key_file ip-address

The same works on Linux and most other mature OSes.

Before ThinkUp can be installed you need an HTTP server that supports PHP 5.3 or higher, that is, a LAMP server. The guide I used contains a link on how to set your hostname and timezone. (If you change the timezone, remember to repeat it after running 'yum update', since that resets the link /etc/localtime.)

Download the ThinkUp software with

cd /tmp

This downloads the zip file. Unpack it under the public_html directory:

cd /srv/www/
unzip /tmp/

With the web server, PHP, and MySQL up, you are ready to configure ThinkUp. Open a browser and go to your ThinkUp URL. You will probably see this:

Just do what it says, as root:

chown -R apache /srv/www/

Then hit F5 to reload the page. The next screen will look like this:

Hit the link on the page to configure ThinkUp for the first time. Various requirements are checked, and in my case I was missing GD and PDO; whatever those are.

Hit the links to continue (see? you get almost everything free of effort here). OK, maybe it was not obvious what to do to get GD installed (hint: click Installation on the next page if you want to know how). To install GD on CentOS I did:

yum install php-gd

PDO for PHP and MySQL is installed and configured with:

yum install php-pdo
yum install php-mysql

Restart the web server and MySQL after this, then reload the page; the configuration continues to the next page once all requirements are in place. On the next page you create your ThinkUp account and configure the connection to the MySQL database. Before you can do that, create the database for ThinkUp and a user. Connect to MySQL as root:

mysql -u root -p

Create database and user with:

create database thinkup;
create user 'thinkup'@'localhost' identified by 'strongpassword';
grant all on thinkup.* to 'thinkup'@'localhost';

Then enter the respective information on the web page. On the next page you will be presented with an error message saying that the apache user cannot write to the configuration file. As usual, you are told what to do. Execute this as root:

sudo touch /srv/www/
sudo chown apache /srv/www/

This is enough; you don't actually have to copy and paste the script presented in the window below.

Hit next and you are finished with the installation.

After you have verified your ThinkUp account (check your mail) you can log in to your ThinkUp web application and start adding accounts for your social networks. Happy analyzing; remember, it is all about the network.

When playing with this I once again realized that Twitter is the most important social network for me; I'm following people who share a lot of useful information, usually quick quotes and links to good stuff. If you have an attention deficit, Twitter is perfect: 140 chars and then time to think. It was through a few tweets that I discovered HP had opened their beta program, and in another I read about ThinkUp as an alternative to Klout. Facebook is less important to me and I often forget to check it; it is just a collection of faces... Google+ is used by some of my Oracle peers, but it is much easier to check Twitter when I'm on the run. Links to good stuff are either checked out immediately or sent to Read It Later.

I've met quite a few people who are frustrated by the tardiness of many IT departments; getting a new server takes too much time. Having the possibility to create one in the cloud, in minutes, from an easy-to-use web interface such as what HP Cloud offers: that is something many will consider.

Sunday, April 1, 2012

Extract SQL from trace file with Perl

Perl is included with the Oracle database software (SE and EE), even on Windows. I wrote this to extract the SQL statements from a trace file (generated with sql_trace=true or by setting the 10046 event). A simple indentation, one tab for each level, is used to show recursive statements.

while (<>) {
    if (/PARSING IN CURSOR/) {
        ($level) = $_ =~ /\s+dep=(\d+)\s+/;
        $line = <>;                      # first line of the SQL statement
        while ($line !~ /END OF STMT/) {
            for ($i = 0; $i < $level; $i++) {
                print "\t";
            }
            print $line;
            $line = <>;
        }
        print "\n";
    }
}
You usually have to expand your PATH on Windows to find Perl; you'll find perl.exe somewhere below %ORACLE_HOME%. The code will of course work on Linux and any other OS where you have Perl. Store the code above in a file and call the script with the trace file as argument:

perl your_trace_file.trc | more

There is an option (record=filename.sql) in tkprof to extract the SQL, but it does not include recursive statements. I guess the difference is that this script is something to build on when you have a more complex task.

Sunday, March 25, 2012

OUGN 2012 Day 2 - Wrap up

The last day started with the master class Key Features of Redo by Jonathan Lewis. You need quite a reputation to gather a large crowd for such a subject. Well prepared, with good slides, lots of real-world experience, and questions from the crowd, it made for time well spent in the auditorium. He didn't get through all the slides, but that did not matter, since we had two hours filled with entertaining learning.

At 11:30am I attended the presentation Falling in Love all over again - OEM 12c Performance Page Enhancements; you can guess from the title who gave it. Clearly this was a presentation Doug loved to give, and he praised the new, improved functionality and interface. He made some comparisons to other methods of tuning and analyzing a performance problem, a subject I still find interesting. The presentation was for the most part a live demo with SwingBench and virtual machines; no Death by PowerPoint after three hours of sleep. The only problem I have with these features is that many customers around here do not purchase the management packs needed for all the fun. But I hope more people will understand the necessity of having this software. I had to leave the presentation early because of a board meeting, really a lunch with the sponsors. Doug, if you read this, now you know why two of us left early; it was not because you did less than splendid.

After lunch there was a talk show with the main sponsors and with Jonathan Lewis, Maria Colgan, Stephan Janssen, and VP of EMEA Andrew Sutherland, hosted by board member Alice Rossman. Alice does this very well; by now it has probably dawned on her that she will have to do it every year. I like to hear stars in the Oracle sphere talk about what matters outside the office. Andrew Sutherland was the other funny Scottish guy. I think we should make sure we have at least two of them every year; it is something about their humor, their special love for the Brits, and that accent.

Staying up late with good company several nights in a row makes you tired. I attended only one session after lunch: Using the PL/SQL Hierarchical Performance Profiler with Bryn Llewellyn. I'm not writing much code these days, but I know this is useful and something I want to know about when I need it. Bryn speaks very clearly in his presentations and engages easily in discussions afterward.

The feedback we received throughout the conference was very nice, both on the level of the sessions and the choice of content. Making a program like this is much more than sending emails to a lot of famous people. We need to find variation in the subjects, check against feedback from last year, and also have content that pushes the members forward. Making the program takes a lot of effort in itself, but it clearly paid off.

This year we had hands-on labs. Not all were successful; others had a great turnout, like RAC Attack. We learned a lot from this, and I believe the one-day conference in Oslo before the main conference will be repeated. We also invested in the new Java track; it was a good start and broke some new ground we can build on further.

I had a great conference and feel pride that so many foreign speakers want to come and present at OUGN. Since we are not scaling back, I expect to see most of them again, together with other geniuses. We might have some logistics challenges, like needing a larger ship.

My personal ambition for next year is to convince the rest of the board that we need to find some way to get Cary Millsap from Method R, and one or more people from Pythian (like Gwen, Alex, and Yury). We also need to encourage more Norwegians to present user cases.

Finally, I want to thank my employer Keystep Consulting for letting me spend many hours otherwise billable in order to prepare for the conference. I am proud of the team there and being part of it.

Friday, March 23, 2012

OUGN 2012 Day 1 - At sea

This is my first conference as a board member, and going to a presentation in every slot is not possible. I did go to Doug Burns' presentation on SQL Plan Management in 11g, though, even if I had actually seen it at OOW 2011; probably because I like Doug's presentation style. This preso does not go into very much detail on SPM, but takes us on a journey through the subject and leaves it to us to actually get some experience with it. It does look interesting, and the only thing that keeps me from getting much experience with it is that the current customer is lagging behind on the upgrades to 11g; the few databases we have on 11g are small and have not shown any problems with plan instability.

Maria Colgan's presentation on how to collect statistics was a big hit. A colleague stated it was perhaps the best presentation he had ever been to. Very entertaining and packed with useful takeaways. Yes, I went to the same presentation at OOW 2011, as if I'm unwilling to expose my brain to something new. But in fact I learned something new, and I still have something to try out, for example the next time I encounter a database that uses partitioning, sigh.

The RAC Attack team set up in the public area, and a couple of enthusiasts insisted on completing the lab from Day 0. The intention was not only to have a lab on board, but also a place where people could come and go, and ask questions about RAC in particular and life in general. I think it worked well, but obviously the hands-on lab competed with the sessions, and most concluded that the lab was something they could try at home later.

The feedback we got after Day 1 was great, but I'll save that to the wrap-up.

Wednesday, March 21, 2012

OUGN 2012 Day 0 - The Martins

23:48, and so tired that a Twitter message would be more appropriate. But I just wanted to put on record that today turned out to be a very pleasant day. I had the responsibility of hosting track #5, which started with Martin Büchi on Information Lifecycle Management in OLTP DBs with Partitioning. ILM is a subject many postpone because it is too complicated. In his presentation Martin showed a reference model and illustrated various scenarios with alternative solutions. He then showed how partitioning can be used to archive data that has aged out. I cannot possibly do justice to his presentation here, but since I write about it I'm sure I'll remember to come back to it, download it, and digest it. If Mr Büchi reads this, he has it on record that he did well. I hope he'll be back again.

Two hours later, two other Martins took the stage. Martin Paul Nash and Martin Bach introduced RAC Attack for the first time in Norway. I've been excited about RAC Attack since I took the courage to recommend it to the OUGN board and, after getting the approval, invited them over. They did not disappoint at all! The turnout was quite good: 20 people filled the room and started hacking on VirtualBox, VMware, and Oracle. After lunch a few wearied off, but the rest continued to the end. We were the last to leave the conference area, and we got plenty of positive feedback on this. I think I've been tweeting enough about #RACATTACK lately; it was a great experience.

Actually, OUGN was blessed with a fourth Martin, Martin Widlake. I couldn't go to his presentation, but head counts and feedback showed it went very well. I did have the pleasure of talking to him during the dinner later. The picture of the Martins was taken at Holmenkollen just before the dinner. The smile on Martin's face pretty much sums up the mood after Day 0. Loved it.

Tuesday, March 20, 2012

OUGN 2012 Day -1

Tomorrow is the big day. For the last four years our annual conference has been held on a cruise ship from Oslo to Kiel and back. That is two days with presentations, fun, and stuff; a big success we are not going to change, at least not this year. But for this conference we sent out too many invitations to Very Important Presenters and got an impressively positive response; we ran out of slots pretty fast. (Let's say we are not really #1 when it comes to logistics in this country.) So we decided to extend the conference with one day in Oslo before we board the ship on Thursday. Someone called it a pre-conference, which is unfair to the presenters since they are delivering real meat tomorrow. Someone called it a whole-day conference; eh, as opposed to the other days? Anyway, tomorrow we'll have lots of hands-on labs, master classes, and what I'm really excited about: RAC Attack.

Tonight we met at the top floor of the conference hotel for some kind of pre-conference beer meetup. I suspect the guys who showed up didn't really know they were taking part in a meetup; they just went to the nearest bar with the coolest view in the city. As always, it was nice to meet some of the invited guests: Maria Colgan, Christian Shay, Kuassi Mensah, Bryn Llewellyn, and Holger Friedrich, together with the rest, and beers.

Some arrived on a late flight from the UK, like the two Martins responsible for RAC Attack. Not sure where Doug is; he is not presenting tomorrow AFAIK, but he will be very welcome when he shows up. Btw, Maria told me the two of them have the same humour in their presentations. I agree; those two could fool anyone into studying statistics.

More details on the conference here. Let the party begin.