More OS X Tips for Oracle Developers

The latest from my blog:

Fix for Page Load Issues in Safari 5.1.7

Easy Fix for High CPU Usage in SQL Developer on OS X

Run Oracle Forms 11g Applications on OS X Lion

HugePages Configuration and Monitoring

I was recently working with a customer who was experiencing some memory pressure on their Linux database server.
They were running Oracle 10.2.0.5 on Oracle Enterprise Linux 5.7.
The buffer cache was 50G and they wanted to increase it to about 100G.

Here is what the memory allocation on the machine looked like: 10G in PageTables.
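Without HugePages, a large SGA mapped by many dedicated server processes can push page-table overhead into the tens of gigabytes. A quick way to check both figures on a Linux box (a minimal sketch; the field names come straight from /proc/meminfo):

# page-table overhead and current HugePages configuration
grep -E "PageTables|HugePages|Hugepagesize" /proc/meminfo

# kernel parameter that reserves HugePages (typically set in /etc/sysctl.conf)
sysctl vm.nr_hugepages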
Read more

OS X Software for Oracle Developers

New on my Tumblr blog: my list of OS X software recommendations for Oracle Developers.

DML Error Logging

Introduced with Oracle 10gR2, DML Error Logging allows you to make standard SQL INSERT, UPDATE, and MERGE statements behave more like the SQL*Loader utility, logging errors to a table rather than failing and rolling back the first time an error condition is encountered.
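For a flavor of the syntax (a minimal sketch; emp and emp_staging are hypothetical tables), the error log table is created once with DBMS_ERRLOG, and the DML statement then names it in a LOG ERRORS clause:

-- one-time setup: creates the error log table, ERR$_EMP by default
EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('EMP');

-- bad rows are written to ERR$_EMP instead of failing the whole statement
INSERT INTO emp
SELECT * FROM emp_staging
LOG ERRORS INTO err$_emp ('nightly load') REJECT LIMIT UNLIMITED;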

Read on…

ODBC Connection Fails for MS Access App on Citrix

(Original article on my Tumblr page.)

I ran into an interesting problem recently when attempting to publish an MS Access application with Citrix XenApp. The application pulled data from an Oracle database, so an ODBC connection was created on the Citrix server. What we found was that when a non-privileged user tried to run the application, the connection failed. When the Citrix server admin ran it, however, it worked. Furthermore, as long as the admin was logged in, anybody else could run the application successfully; when he logged out, the users’ connections broke again.

Read more

Discovering Command-line Java Profiling Tools

On my Tumblr blog I review a bevy of command-line Java profiling tools that you may not even know existed. They do, and they come included with the JDK!
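For a taste (the PID 12345 is hypothetical), here are a few of the monitoring and profiling tools that ship in the JDK's bin directory:

# list running JVMs and their PIDs
jps -l

# sample heap and GC utilization every second for a given JVM
jstat -gcutil 12345 1000

# dump every thread's stack trace
jstack 12345

# histogram of live objects on the heap
jmap -histo:live 12345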

Read on…

VMSTAT Tools

As database administrators, we should have the ability to review and evaluate resources on the underlying hardware where our databases reside. Often this review is done during times of stress when there is a current performance problem. Fortunately, there are a number of tools that are useful for determining the resource usage on these platforms. I will focus on the VMSTAT tool in this blog entry.

VMSTAT is a tool on Unix-type platforms that provides an impressive amount of resource information, but its output is formatted in such a way that it can be difficult to read, much less to determine when a given snapshot occurred if you are looking back at historical data. Here is an example:

[root@new-host ~]# vmstat 2
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 1916352 39376 906944 0 0 186 39 1024 1139 1 2 93 4 0
1 0 0 1916352 39384 906968 0 0 0 76 1015 1203 0 2 95 4 0
0 0 0 1916260 39392 906976 0 0 0 58 1014 1118 0 2 98 0 0

Through some simple Unix scripting commands, you can get more user-friendly information from this tool.

The vmstat_tool.sh script below works for most Linux versions:

#!/bin/sh
#vmstat_tool.sh
#vmstat parser script. Makes the information easily readable.

#Ensure that the number of loops and delay parameters
#were passed to this script
if [ $# -ne 2 ]
then
  echo "Usage: $0 <loop count> <delay in seconds>"
  exit 1
fi

export LPCNT=$1
export DLAY=$2

c=0

while [ "$c" -lt $LPCNT ]
do
  # print a timestamp, then parse the single summary line vmstat reports
  date
  vmstat | tail -n 1 | awk '{printf("%d processes are waiting\n%d processes are blocked\n%d virtual memory used, %d idle memory, %d buffer memory, %d cache memory\n%d memory swapped in from disk, %d memory swapped out to disk\n%d blocks per second read (bi)\n%d blocks per second written (bo)\n%d interrupts per second\n%d context switches per second\nCPU usage: %d%% (user, non-kernel), %d%% (system, kernel), %d%% idle, %d%% wait\n\n",$1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16)}'

  sleep $DLAY
  c=$((c+1))
done

Sample Output (Single Loop):

Sat Feb 18 11:15:48 PST 2012
1 processes are waiting
0 processes are blocked
0 virtual memory used, 1990308 idle memory, 32964 buffer memory, 843564 cache memory
0 memory swapped in from disk, 0 memory swapped out to disk
928 blocks per second read (bi)
75 blocks per second written (bo)
1062 interrupts per second
1128 context switches per second
CPU usage: 4% (user, non-kernel), 2% (system, kernel), 77% idle, 17% wait

As you can see, the output is much more readable than the regular vmstat output.

Should you need to do more than just a cursory examination of the VMSTAT output, you can use this script to generate an output file for later examination, as follows:

nohup vmstat_tool.sh 288 300 > ./vmstat.log &

In the above example, the native nohup command runs the vmstat_tool.sh script, looping every 5 minutes (300 seconds) for 24 hours (288 five-minute loops). The output is written to vmstat.log in the local directory, and the job runs in the background thanks to the '&' at the end of the command. Examining the output is then a simple matter of using the native cat and grep commands in the following manner:

cat ./vmstat.log | grep "processes are waiting" | more

Sample Output:

0 processes are waiting 0 processes are blocked
0 processes are waiting 0 processes are blocked
0 processes are waiting 0 processes are blocked
0 processes are waiting 0 processes are blocked
0 processes are waiting 0 processes are blocked
0 processes are waiting 0 processes are blocked
0 processes are waiting 0 processes are blocked
0 processes are waiting 0 processes are blocked

You can now page through the first line of each parsed VMSTAT snapshot one screen at a time. If you see a line with abnormal values, you can then use the vi utility to zero in on that particular value and read the timestamp just above it.
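For example (a minimal sketch; the line number 4217 is hypothetical), you can have grep report line numbers for the snapshots where the value is non-zero, then open the log at that spot, since the date line sits directly above each parsed block:

# show only snapshots where more than zero processes are waiting, with line numbers
grep -n "processes are waiting" ./vmstat.log | grep -v ":0 processes"

# open the log at the offending line (e.g. 4217) to read the nearby timestamp
vi +4217 ./vmstat.log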

My crsstat script: Improved Formatting of crs_stat on 10g and 11g

My crsstat script displays CRS or HAS resources (i.e., a 10gR2-11gR2 cluster, or the 11gR2 ASM/single-instance scenario) in a tabular format. It has been a work in progress for a while. When I first started working with 10gR2 RAC, I quickly discovered that I didn't much care for the formatting of the default crs_stat -t output, so I wrote the first version of the crsstat script, which consisted of a Korn shell script and an awk script (and produced pretty close to the same output as the current version). I later enhanced it to show a different color if the state didn't match the target state.

Then 11gR2 came out and I really didn't like its output either, so I sat down, wrote an awk script to format the 11gR2 output, and added a variable at the top to choose which awk script to use. One of my colleagues saw it and liked the output, so I shared it; however, he asked if I could simplify the installation, since at that time you had to set the location of the script, whether it was 10g or 11g, and the location of the CRS_HOME. Finally, I sat down and made it as universal and easy to use as I could by embedding the awk scripts as variables within the ksh script and adding a section that attempts to determine the CRS or HAS home automatically. It seems to work fairly well and several of my colleagues have been using it, so I thought I would share it more widely. Hope everyone finds it useful!
Read more

Table Functions: 9i Feature Worth Revisiting (Part 2)

In Part 1 of this series I showed a simple example of how table functions can be used to treat PL/SQL collection types as tables in a SQL query. We used a nested table collection type to which we manually added a few entries. In this installment, I intend to show a more practical use of table functions using all three PL/SQL collection types: associative arrays, nested tables, and VARRAYs.
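As a quick refresher on the basic pattern from Part 1 (a minimal sketch; the type and function names here are hypothetical, not the ones used in the articles):

-- A schema-level collection type is required for use in SQL
CREATE TYPE number_ntt AS TABLE OF NUMBER;
/

-- A function that builds and returns the collection
CREATE OR REPLACE FUNCTION first_n (p_limit IN PLS_INTEGER) RETURN number_ntt IS
  l_nums number_ntt := number_ntt();
BEGIN
  FOR i IN 1 .. p_limit LOOP
    l_nums.EXTEND;
    l_nums(l_nums.COUNT) := i;
  END LOOP;
  RETURN l_nums;
END;
/

-- The TABLE() operator lets SQL treat the collection as a row source
SELECT column_value FROM TABLE(first_n(5));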

Read more

Sizing up your environment for backups to tape

So, you just spent many long hours putting together your latest and greatest creation. Hot off the presses, you have a true work of art. Now what? You are being audited on backups? You realize there was no money for backup gear in the budget, but having that kind of risk hanging over your head can only be viewed as something to offload to an over-taxed shared-services IT department at your org, or as a chance to play Russian roulette with your data? No! It's time to get an accurate assessment of what you need to back up and get your tape requirements sorted out, so that you can present a case for your backups and for any gear needed to get the job done. Don't worry, you may not have to spend 2 trillion dollars on gear (unless you work for the government), especially if you already have a working environment for your existing backups and just need to find or make capacity on the tape SAN.

That being said, you still need to know what you'll need to do or ask for, and how can you do that if you just go to upper management and say, "I just got an Exadata and I need at least 48 × 365 = 17,520 TB of capacity in my tape backup gear to make sure we are covered for the year. Let's not be greedy."? Even the most novice executive will question the cost of satisfying that kind of requirement.

Assuming that you are using RMAN, that you have already configured a weekly full backup of each database to tape plus daily incrementals, and that a reasonable amount of archivelogs is generated daily to the Fast Recovery Area, we can proceed to estimate your immediate needs.
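For reference, that kind of schedule might be implemented with RMAN commands along these lines, run from cron or the scheduler of your choice (a minimal sketch under the assumptions above, not a prescription for any particular environment):

# weekly, e.g. Sunday -- assumes tape (sbt) is the configured default device type
BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;

# remaining days of the week
BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;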

If you are running Oracle and you are using RMAN, you can get the sizing numbers you need for your tape requirements fairly easily if you are already taking backups to your Fast Recovery Area. There are a number of different views that can provide information about your backups, but I have found that the most useful information comes from the views queried below. The following script collects data on the different types of backups you are currently running and their sizes (it lets you feed in a list of SIDs):

#!/bin/bash
# Summarize RMAN backup sizes per day and per backup type
# for each SID supplied on the command line.

. /home/oracle/.bash_profile

export ORAENV_ASK=NO

# Space-separated list of SIDs, fed in as command-line arguments
DB_LIST="$*"

for DB in $DB_LIST; do
  export ORACLE_SID=$DB
  . oraenv >/dev/null

  echo "Database: $DB"

  sqlplus -s "/ as sysdba" <<EOF
set linesize 150
set pagesize 300

select ctime "Date",
       decode(backup_type, 'L', 'Archive Log', 'D', 'Full', 'Incremental') backup_type,
       bsize "Size GB"
from  (select trunc(bp.completion_time) ctime,
              backup_type,
              round(sum(bp.bytes/1024/1024/1024),2) bsize
       from   v\$backup_set bs,
              v\$backup_piece bp
       where  bs.set_stamp = bp.set_stamp
       and    bs.set_count = bp.set_count
       and    bp.status    = 'A'
       group by trunc(bp.completion_time), backup_type)
order by 1, 2;

exit;
EOF
done

You can then look at the history of your current backups and right-size your tape requirements.
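For example, with purely hypothetical numbers: if the history shows a 500 GB weekly full, roughly 50 GB per daily incremental, and about 30 GB of archivelogs per day, then keeping four weeks of backups on tape works out to roughly 4 × (500 + 6 × 50 + 7 × 30) = 4 × 1,010 GB, or about 4 TB of capacity before compression.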

You might also check how many archive logs you are generating in a week under full production load:

col gbytes form 999,999.99

select sum(blocks*block_size)/1024/1024/1024 gbytes
from   v$archived_log
where  next_time > sysdate-8;

Some things to consider when you decide on your tape solution:

The type of tape drives

Encryption needs

The amount of compression

The number of tape drives

The type of backup software (OSB vs. NetBackup vs. Tivoli)

Your vaulting schedule (number of available media slots)

How fast your channels can send data to the tape drives
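Channel parallelism, in particular, usually maps one-for-one onto the number of tape drives you can write to at once. A minimal sketch of the relevant RMAN configuration (the parallelism value and library path are placeholders, and the PARMS string depends entirely on your media management software):

rman target / <<EOF
CONFIGURE DEFAULT DEVICE TYPE TO sbt;
CONFIGURE DEVICE TYPE sbt PARALLELISM 2;
CONFIGURE CHANNEL DEVICE TYPE sbt PARMS 'SBT_LIBRARY=/path/to/media/manager/libobk.so';
SHOW ALL;
EOF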

Assuming you already have a tape SAN but are not given enough money in the budget, or enough lead time, to get new gear in place before an audit, consider looking at what you already have: check whether you are running too many backups, whether your backups are tuned incorrectly, or whether some backups would benefit from VTL or dedup technologies, in order to free space on your existing tape infrastructure.