Archive for December, 2011

Running Oracle database inside Amazon Cloud

December 30, 2011

Databases in Cloud, by Patrick Schwanke

Lots of people talk about the cloud nowadays. Having worked with Oracle databases for about 9 years, I wanted to see how an Oracle database would run inside the Amazon Web Services cloud, and how to get it working from scratch.

Over the next few days I will post some experiences, stumbling blocks, and other how-to material:

  • How to appropriately set up your AWS account and environment: Step 1
  • Amazon Cloud Basics for Oracle DBAs: Step 2
  • How to create an Oracle instance: Step 3
  • How to access my Oracle instance in the cloud (SSH, RDP, File Transfer, Listener, SQL*Plus): Step 4
  • How to do some very useful automation / scripting
  • How about performance in the cloud?
  • How to do backups inside the cloud or into the cloud

Related Articles:

Monitoring Oracle databases running on Amazon EC2

Monitoring Oracle databases running on Amazon RDS

Source

Links:

Databases in Cloud

Amazon AWS


Daily Check List

December 27, 2011

Daily Procedures:

1. Verify that all instances are up

Make sure the database is available. Log into each instance and run daily reports or test scripts. Some sites may wish to automate this.

Optional implementation: use Oracle Enterprise Manager’s ‘probe’ event.
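A quick manual check can also be scripted through SQL*Plus; a minimal sketch, run against each instance in turn:

```sql
-- Minimal availability check: confirm the instance is open and active.
SELECT instance_name,
       status,
       database_status,
       startup_time
FROM   v$instance;
-- STATUS should be OPEN and DATABASE_STATUS should be ACTIVE.
```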

2. Check for new entries in alert log file

• Connect to each managed system.
• Use 'ssh', 'telnet', or a comparable program.
• For each managed instance, go to the background dump destination, usually $ORACLE_BASE/admin/<SID>/bdump. Make sure to look under each managed database's SID.
• At the prompt, use the Unix 'tail' command to view alert_<SID>.log, or otherwise examine the most recent entries in the file.
• If any ORA- errors have appeared since the previous time you looked, note them in the Database Recovery Log and investigate each one. The recovery log is in <file>.

3. Verify DBSNMP is running

1. Log on to each managed machine to check for the 'dbsnmp' process.
For Unix: at the command line, type ps -ef | grep dbsnmp. There should be two dbsnmp processes running. If not, restart DBSNMP. (Some sites disable this on purpose; if so, remove this item from your list, or change it to "verify that DBSNMP is NOT running".)

4. Verify success of database backup

5. Verify success of database archiving to tape

6. Verify enough resources for acceptable performance

6.1 Verify free space in tablespaces.

For each instance, verify that enough free space exists in each tablespace to handle the day's expected growth. As of <date>, the minimum free space for <repeat for each tablespace>: [ <tablespace> is <amount> ]. Once incoming data is stable and average daily growth can be calculated, the minimum free space should cover at least <time to order, get, and install more disks> days' worth of growth.

a) On each instance, run free.sql to check the free MB in each tablespace.
Compare the result to the minimum free MB for that tablespace. Note any low-space conditions and correct them.

free.sql

select tablespace_name                    as name,
       sum(blocks)                        as free_blk,
       trunc(sum(bytes) / (1024 * 1024))  as free_m,
       max(bytes) / 1024                  as big_chunk_k,
       count(*)                           as num_chunks
from   dba_free_space
group by tablespace_name;

b) On each instance, run space.sql to check the percentage free in each tablespace.
Compare the result to the minimum percent free for that tablespace. Note any low-space conditions and correct them.
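The original space.sql script is not included in this checklist; a minimal sketch of an equivalent query, built only on the standard DBA_DATA_FILES and DBA_FREE_SPACE views, might look like:

```sql
-- Hypothetical reconstruction of space.sql: percent free per tablespace.
select d.tablespace_name,
       round(nvl(f.free_bytes, 0) / d.total_bytes * 100, 1) as pct_free
from   (select tablespace_name, sum(bytes) as total_bytes
        from   dba_data_files
        group by tablespace_name) d,
       (select tablespace_name, sum(bytes) as free_bytes
        from   dba_free_space
        group by tablespace_name) f
where  d.tablespace_name = f.tablespace_name (+)
order by pct_free;
```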

6.2 Verify rollback segment

Status should be ONLINE, not OFFLINE or FULL, except in some cases you may have a special rollback segment for large batch jobs whose normal status is OFFLINE.

a) Optional: each database may have a list of rollback segment names and their expected statuses.
b) For the current status of each ONLINE or FULL rollback segment (identified by undo segment number rather than by name), query V$ROLLSTAT.
c) For the storage parameters and names of ALL rollback segments, query DBA_ROLLBACK_SEGS. That view's STATUS column is less accurate than V$ROLLSTAT, however, as it lacks the PENDING OFFLINE and FULL statuses, showing these as OFFLINE and ONLINE respectively.
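To see segment names alongside the runtime status, the two views can be joined on the undo segment number; a minimal sketch:

```sql
-- Rollback segment status by name: join the data dictionary view
-- (names) to the dynamic performance view (runtime status).
select d.segment_name,
       d.tablespace_name,
       v.status
from   dba_rollback_segs d,
       v$rollstat        v
where  d.segment_id = v.usn;
```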

6.3 Identify bad growth projections.

Look for segments in the database that are running out of resources (e.g. extents) or growing at an excessive rate. The storage parameters of these segments may need to be adjusted. For example, if any object has reached 200 current extents, AND it is an object that is supposed to grow large, raise its MAX_EXTENTS to UNLIMITED.

a) To gather daily sizing information, run analyze5pct.sql. If you are collecting nightly volumetrics, skip this step.
b) To check current extents, run nr_extents.sql
c) Query current table sizing information
d) Query current index sizing information
e) Query growth trends
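The nr_extents.sql script is not reproduced here; a minimal stand-in, using the standard DBA_SEGMENTS view and the 200-extent threshold mentioned above, might be:

```sql
-- Hypothetical stand-in for nr_extents.sql: segments approaching
-- the 200-extent threshold.
select owner,
       segment_name,
       segment_type,
       extents,
       max_extents
from   dba_segments
where  extents > 150
order by extents desc;
```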

6.4 Identify space-bound objects.

Space-bound objects' NEXT_EXTENT is bigger than the largest free extent the tablespace can offer. Space-bound objects can harm database operation. If such an object is found, first investigate the situation; then consider ALTER TABLESPACE <tablespace> COALESCE, or add another datafile.

a) Run spacebound.sql. If all is well, zero rows will be returned.
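spacebound.sql itself is not shown in this checklist; a sketch of the classic form of this check compares each segment's NEXT_EXTENT against the largest free chunk in its tablespace:

```sql
-- Hypothetical reconstruction of spacebound.sql: objects whose next
-- extent cannot fit in any free chunk of their tablespace.
select s.owner,
       s.segment_name,
       s.tablespace_name,
       s.next_extent
from   dba_segments s
where  s.next_extent > (select nvl(max(f.bytes), 0)
                        from   dba_free_space f
                        where  f.tablespace_name = s.tablespace_name);
```

If all is well, zero rows are returned.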

6.5 Review processes for contention on CPU, memory, network, or disk resources.

a) To check CPU utilization, go to x:\web\phase2\default.htm => system metrics => CPU utilization page. The maximum CPU utilization is 400 because there are 4 CPUs on the phxdev and phxprd machines. Investigate if CPU utilization stays above 350 for any length of time.
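On 10g and later, host CPU figures can also be read from inside the database; a sketch (V$OSSTAT is not available in earlier releases):

```sql
-- Host-level CPU statistics as seen by the database (10g+).
-- Values are cumulative since instance startup.
select stat_name, value
from   v$osstat
where  stat_name in ('NUM_CPUS', 'BUSY_TIME', 'IDLE_TIME');
```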

7. Copy Archived Logs to Standby Database and Roll Forward

If you have a Standby Database, copy the appropriate Archived Logs to the expected location on the standby machine and apply those logs (roll forward the changes) to the standby database. This keeps the standby database up-to-date.

The copying of logs, the applying of them, or both, can in some cases be automated. If you have automated them, then your daily task should be to confirm that this happened correctly each day.
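With a managed standby, the apply step can be left to the database itself; a sketch, assuming archived logs are already being shipped to the standby host:

```sql
-- On the standby instance: start managed recovery in the background.
alter database recover managed standby database disconnect from session;

-- Then confirm logs are actually being applied.
select sequence#, applied
from   v$archived_log
order by sequence#;
```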

8. Read DBA manuals for one hour

Nothing is more valuable in the long run than a DBA who is as widely experienced, and as widely read, as possible. Reading should include DBA manuals, trade journals, and possibly newsgroups or mailing lists.

References:

  • Loney, Kevin. Oracle8 DBA Handbook.
  • Cook, David. Database Management from Crisis to Confidence.
  • Cox, Thomas B. The Database Administration Maturity Model.

Automatic Workload Repository

December 14, 2011

Oracle 10g provides the Automatic Workload Repository (AWR), a repository of performance information collected by the database to aid DBAs in the tuning process. The main focus for the Oracle database in version 10g is self-manageability, and the AWR is where the data that supports self-management is stored.

AWR Features

The AWR is used to collect performance statistics including:

  • Wait events used to identify performance problems.
  • Time model statistics indicating the amount of DB time associated with a process from the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views.
  • Active Session History (ASH) statistics from the V$ACTIVE_SESSION_HISTORY view.
  • Some system and session statistics from the V$SYSSTAT and V$SESSTAT views.
  • Object usage statistics.
  • Resource-intensive SQL statements.

The AWR identifies the SQL statements with the:

  • largest CPU consumption
  • most buffer gets
  • most disk reads
  • most parse calls
  • most shared memory usage

The data is collected by the MMON background process and stored in the SYSAUX tablespace.

Several of the automatic database tuning features require information from the AWR to function correctly, including:

  • Automatic Database Diagnostic Monitor
  • SQL Tuning Advisor
  • Undo Advisor
  • Segment Advisor

Snapshots Interval and Retention Time

Check the snapshot interval and retention time in DBA_HIST_WR_CONTROL.

SELECT snap_interval, retention
FROM   dba_hist_wr_control;

SNAP_INTERVAL        RETENTION
-------------------- --------------------
+00000 00:30:00.0    +00007 00:00:00.0

They can be modified through DBMS_WORKLOAD_REPOSITORY. By default, snapshots of the relevant data are taken every hour and retained for 7 days. These defaults can be altered using the following procedure (both parameters are given in minutes; here 60 minutes and 10 days).

BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    interval  => 60,         -- minutes
    retention => 10*24*60);  -- minutes (10 days)
END;
/

Extra snapshots can be taken and existing snapshots can be removed, as shown below.

EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;

BEGIN
  DBMS_WORKLOAD_REPOSITORY.drop_snapshot_range (
    low_snap_id  => 22,
    high_snap_id => 32);
END;
/

Snapshot information can be queried from the DBA_HIST_SNAPSHOT view.
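For example, to list the snapshots available for a report, together with their capture windows:

```sql
-- List AWR snapshots and the interval each one covers.
select snap_id,
       begin_interval_time,
       end_interval_time
from   dba_hist_snapshot
order by snap_id;
```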

Baselines

  • A pair of snapshots that represents a specific period of usage.
  • Used to compare current performance against similar periods in the past.

BEGIN
  DBMS_WORKLOAD_REPOSITORY.create_baseline (
    start_snap_id => 210,
    end_snap_id   => 220,
    baseline_name => 'batch baseline');
END;
/

The pair of snapshots associated with a baseline are retained until the baseline is explicitly deleted.

BEGIN
  DBMS_WORKLOAD_REPOSITORY.drop_baseline (
    baseline_name => 'batch baseline',
    cascade       => FALSE);  -- Deletes associated snapshots if TRUE.
END;
/

Baseline information can be queried from the DBA_HIST_BASELINE view.
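For example, to list existing baselines and the snapshot range each one pins:

```sql
-- Existing AWR baselines and their snapshot ranges.
select baseline_id,
       baseline_name,
       start_snap_id,
       end_snap_id
from   dba_hist_baseline;
```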

Automatic Workload Repository Reports

  • Generated in text or HTML.
  • Available through Database/Grid Control.
  • Package DBMS_WORKLOAD_REPOSITORY.
  • Scripts awrrpt.sql (current DBID and instance) or awrrpti.sql (DBID and instance are requested).
  • Very similar to Statspack reports.
  • The script awrinfo.sql generates general information about the AWR, such as its size and data distribution.
  • awrrpti.sql allows you to select a single instance.

@$ORACLE_HOME/rdbms/admin/awrrpt.sql
@$ORACLE_HOME/rdbms/admin/awrrpti.sql

The scripts prompt you to enter the report format (html or text), the start snapshot id, the end snapshot id, and the report filename. The resulting report can be opened in a browser or text editor accordingly.
Time Model Statistics

  • High-level information about the CPU and elapsed time used by different database operations, obtained from the V$SYS_TIME_MODEL and V$SESS_TIME_MODEL views.
  • Times are given in microseconds.
  • The statistic "DB time" represents the total time spent in database calls; it does not include the time spent by background processes.
  • Since "DB time" is accumulated across all foreground sessions and includes wait time, it can be much larger than the elapsed time since instance startup.
  • Some of the information provided by these views is already available in V$SYSSTAT and V$SESSTAT.
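A quick way to see where instance-wide time is going is to rank the time model statistics; a minimal sketch:

```sql
-- Instance-wide time model statistics, largest consumers first
-- (values in microseconds, cumulative since startup).
select stat_name, value
from   v$sys_time_model
order by value desc;
```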

Workload Repository Views

The following workload repository views are available:

  • V$ACTIVE_SESSION_HISTORY – Displays the active session history (ASH) sampled every second.
  • V$METRIC – Displays metric information.
  • V$METRICNAME – Displays the metrics associated with each metric group.
  • V$METRIC_HISTORY – Displays historical metrics.
  • V$METRICGROUP – Displays all metrics groups.
  • DBA_HIST_ACTIVE_SESS_HISTORY – Displays the history contents of the active session history.
  • DBA_HIST_BASELINE – Displays baseline information.
  • DBA_HIST_DATABASE_INSTANCE – Displays database environment information.
  • DBA_HIST_SNAPSHOT – Displays snapshot information.
  • DBA_HIST_SQL_PLAN – Displays SQL execution plans.
  • DBA_HIST_WR_CONTROL – Displays AWR settings.

AWR from Enterprise Manager

The automated workload repository administration tasks have been included in Enterprise Manager. The “Automatic Workload Repository” page is accessed from the main page by clicking on the “Administration” link, then the “Workload Repository” link under the “Workload” section. The page allows you to modify AWR settings or manage snapshots without using the PL/SQL APIs.

Source / Reference Links:

trivadis.com

OracleBase

OracleUtilities.com

Jonathan Lewis

Monitor Oracle Scheduler Jobs

December 9, 2011

Job status and run count:

select job_name,
       owner,
       nvl(to_char(next_run_date, 'DD-MON-YYYY HH24:MI:SS'),
           schedule_name)                                  as next_run_date,
       to_char(last_start_date, 'DD-MON-YYYY HH24:MI:SS')  as last_run_date,
       job_class,
       run_count
from   dba_scheduler_jobs;
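The per-execution history, including failed runs, lives in a separate view; a sketch:

```sql
-- Recent scheduler job executions with outcome and duration.
select job_name,
       status,
       error#,
       actual_start_date,
       run_duration
from   dba_scheduler_job_run_details
order by actual_start_date desc;
```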
