Convert Oracle Auto Login Only Wallet to PKCS12

I had trouble finding any notes on converting an Oracle auto login only wallet (cwallet.sso) into the PKCS12 format (e.g. ewallet.p12) that can be used by other webservers such as Apache.

This may not be the most efficient method but it worked for me – first use the orapki utility to convert to a Java Keystore e.g.

orapki wallet pkcs12_to_jks -wallet cwallet.sso -jksKeyStoreLoc ewallet.jks

Obviously choose a secure password when prompted.

Then use the Java keytool utility to convert the keystore into an ewallet.p12 e.g.

keytool -importkeystore -srckeystore ewallet.jks -srcstoretype JKS -deststoretype PKCS12 -destkeystore ewallet.p12

You’ll need the password you chose for your Java Keystore and to provide a new one for your PKCS12 wallet.
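
If you want to sanity-check the result before pointing a webserver at it, listing the contents of the new wallet is a quick way to do it. Using the same filenames as above, either the Java keytool or openssl (if you have it) will do, and both will prompt for the PKCS12 password e.g.

keytool -list -storetype PKCS12 -keystore ewallet.p12

openssl pkcs12 -info -in ewallet.p12 -nokeys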

Latch free waits, Dynamic Statistics, and the Result Cache

2016-11-29 EDIT: Oracle has announced that in 12.2 the features that drove this overuse of the result cache (dynamic stats triggered by SPDs) will default to off, and provided patches to apply to 1…

Source: Latch free waits, Dynamic Statistics, and the Result Cache

Oracle RDBMS 12.1.0.1 SE ACL Performance

We have come across an issue with Oracle RDBMS 12.1.0.1 Standard Edition (SE) that impacts the performance of all requests to external network resources from the database, e.g. calls to webservices. The issue appears to be due to a dependency in the Oracle code on the Result Cache (a feature that is unavailable in SE).

There is a simple test case that highlights the problem i.e.

set timing on

declare
  http_request  utl_http.req;
  http_response utl_http.resp;
  text          varchar2(32767);
  i             number;
begin
  i := 0;
  while i < 1000
  loop
    http_request  := utl_http.begin_request('http://localhost:8087/index.html');
    http_response := utl_http.get_response(http_request);
    utl_http.read_text(http_response, text, 32766);
    utl_http.end_response(http_response);
    i := i + 1;
  end loop;
end;
/

You will need to use a URL that is accessible from your database, and create an appropriate ACL to give you access to it.
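
For reference, a minimal ACL for the localhost URL above can be created along these lines in 12.1 – the principal TEST_USER is just a placeholder for whichever schema runs the test i.e.

begin
  dbms_network_acl_admin.append_host_ace(
    host       => 'localhost',
    lower_port => 8087,
    upper_port => 8087,
    ace        => xs$ace_type(privilege_list => xs$name_list('http'),
                              principal_name => 'TEST_USER',
                              principal_type => xs_acl.ptype_db));
end;
/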

If you run that code in a 12.1.0.1 SE database you will see an elapsed time of several minutes e.g.

12.1.0.1 SE
Elapsed: 00:03:26.81

On further investigation it should be clear that it is this internal Oracle routine that’s at fault

SQL HASH_VALUE EXECUTIONS Seconds Sec/Exec
-------------------------------------------------------------------------------- ---------- ---------- ------------ ---------------
begin :1 := dbms_network_acl_admin.get_host_aclids(:2,:3); end; 2185038870 2000 201.29 0.10064
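
If you want to pull similar figures for yourself while the test case is running, something along these lines against v$sqlarea should get you close (cpu_time is reported in microseconds there):

select sql_text,
       hash_value,
       executions,
       round(cpu_time/1e6,2)                       seconds,
       round(cpu_time/1e6/nullif(executions,0),5)  sec_per_exec
from   v$sqlarea
where  sql_text like '%get_host_aclids%';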

The same test case executed in either 12.1.0.1 Enterprise Edition or 12.1.0.2 SE2 will run much faster e.g.

12.1.0.2 SE2
Elapsed: 00:00:03.23

The CPU time spent on the internal Oracle routine will also be much lower i.e.

SQL HASH_VALUE EXECUTIONS Seconds Sec/Exec
-------------------------------------------------------------------------------- ---------- ---------- ------------ ---------------
begin :1 := dbms_network_acl_admin.get_host_aclids(:2,:3); end; 2185038870 2000 0.71 0.00036

We believe this is because the code behind dbms_network_acl_admin.get_host_aclids in 12.1.0.1 SE is dependent on the Result Cache.

The Result Cache is a feature Oracle have disabled in SE and hence the performance of the internal Oracle routine is impacted.
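
A quick way to check the state of the Result Cache on a given instance is e.g.

select dbms_result_cache.status from dual;

show parameter result_cache_max_size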

An SR has been logged with Oracle who have agreed it’s due to the Result Cache but have yet to offer a solution.

Apex Schema Optimizer Statistics

Providing the Oracle optimizer with accurate statistics on the Apex schema tables can be challenging.

Many of the tables tracking Apex session state information see a huge turnover in data, such that statistics gathered overnight can quickly become stale, resulting in poor execution plan choices by the optimizer and seriously degraded performance.
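
You can see how quickly things drift by checking the staleness flag between overnight gathers – something like the following (you may need to run dbms_stats.flush_database_monitoring_info first for the flag to be up to date):

select table_name, stale_stats, last_analyzed
from   dba_tab_statistics
where  owner = 'APEX_040200'
and    stale_stats = 'YES';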

There are several ways to tackle the issue e.g. locking the stats on some of the tables to values that are known to perform reliably.
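
For completeness, locking the current statistics on a table is a one-liner with dbms_stats – WWV_FLOW_DATA is just an example of one of the session state tables e.g.

begin
  dbms_stats.lock_table_stats(ownname => 'APEX_040200', tabname => 'WWV_FLOW_DATA');
end;
/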

However I believe it’s better to give the optimizer the most accurate information you can, so it can make more informed decisions based on the actual data in the tables.

The problem is that even if you refresh the optimizer statistics often, you need to ensure that any execution plans stored in the shared_pool are also invalidated, or you risk continuing to use a plan that does not reflect those new statistics.

Fortunately you can achieve all this with one relatively straightforward job i.e.

conn / as sysdba

variable v_jobno number;
begin
  dbms_job.submit(
    :v_jobno,
    'dbms_stats.gather_schema_stats(ownname=>''APEX_040200'',options=>''GATHER STALE'',cascade=>true,no_invalidate=>false);',
    sysdate+(5/(24*60)),
    'sysdate+(5/(24*60))',
    TRUE);
  commit;
end;
/

This will create a job that runs every five minutes and refreshes statistics on any Apex tables that are marked as stale; thanks to setting no_invalidate to false, it will also invalidate any plans in the shared_pool that would be affected by the newly gathered statistics.

You should make sure the schema name reflects your Apex version, but with this job in place you should have no worries about stale statistics on the Apex session state tables impacting your application performance.
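
Once the job has been submitted you can confirm it is scheduled, and watch it running every five minutes, with e.g.

select job, what, interval, last_date, next_date
from   dba_jobs
where  what like '%gather_schema_stats%';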

Statspack Purge Job

Adding a job to take hourly statspack snapshots has always been simple e.g.

variable v_jobno number;
begin
  dbms_job.submit(:v_jobno,
    'statspack.snap(i_snap_level=>7);',
    trunc(sysdate+1/24,'HH'),
    'trunc(sysdate+1/24,''HH'')',
    TRUE);
  commit;
end;
/
Purging snapshots automatically used to be a little more difficult; fortunately with more recent versions of statspack you can now do e.g.

variable v_jobno number;
begin
  dbms_job.submit(:v_jobno,
    'statspack.purge(i_num_days=>14,i_extended_purge=>TRUE);',
    trunc(sysdate+1),
    'trunc(sysdate+1)',
    TRUE);
  commit;
end;
/

This creates a dbms_job that runs every day at midnight purging any snapshots older than 14 days.
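
To confirm the purge is keeping the repository in check you can query the snapshot range held in the PERFSTAT schema e.g.

select min(snap_time) oldest_snap,
       max(snap_time) newest_snap,
       count(*)       snapshots
from   perfstat.stats$snapshot;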

Oracle 11gAS Reports Server

The Oracle 11gAS Reports Server is a fragile thing, but recently one particular error was cropping up regularly with (usually) large reports.

The error message reported was REP-50125 with the usual meaningless gibberish i.e.

[2013-12-11T02:51:33.442+00:00] [reports] [INCIDENT_ERROR] [REP-50125] [oracle.reports.server] [tid: 16] REP-50125 : org.omg.CORBA.COMM_FAILURE: vmcid: SUN minor code: 208

This has lots of unhelpful hits on the Oracle Support Site – suggesting it’s just a common generic error message.

As it was mostly large reports that were the problem, a resource issue was the likely culprit, so the obvious first guess was to increase the maximum Java heap size by setting the JVM Options for the reports server to e.g.

-Xmx4096M

This had no effect unfortunately, so we turned to the maximum Java stack size – which on 64-bit Java 6 defaults to 1024K.
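
If you want to check what the default actually is for your JDK, you can dump the JVM flags (this option is available on reasonably recent Java 6 updates onwards) e.g.

java -XX:+PrintFlagsFinal -version | grep ThreadStackSize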

Increasing this by setting JVM Options to e.g.

-Xss4096K

did the trick – the reports now completed successfully.

OPMN CPU Usage with Oracle Forms

We’re using Oracle WebLogic 10.3.5.0 and Oracle Forms & Reports 11g R2 (11.1.2.0), and have noticed that the CPU usage of the opmn process increases with the number of concurrent Oracle Forms users, to the point where it’s constantly hogging a single CPU with more than a few hundred users connected.

Running a truss on the opmn process indicates that it’s interrogating the /proc filesystem for every Forms Runtime process e.g.

4934/16: open("/proc/23356/xmap", O_RDONLY) = 14
4934/16: fcntl(14, F_SETFD, 0x00000001) = 0
4934/16: fstat(14, 0xFFFFFFFF77BFAE58) = 0
4934/16: pread(14, "01".., 28944, 0) = 28944
4934/16: pread(14, "01".., 43416, 0) = 43416
4934/16: pread(14, "01".., 57888, 0) = 57888
4934/16: pread(14, "01".., 72360, 0) = 72360
4934/16: pread(14, "01".., 86832, 0) = 85968
4934/16: close(14) = 0
4934/16: open("/proc/23356/psinfo", O_RDONLY) = 14
4934/16: fcntl(14, F_SETFD, 0x00000001) = 0
4934/16: read(14, "0201 [ <".., O_RDONLY) = 14
4934/16: fcntl(14, F_SETFD, 0x00000001) = 0
4934/16: read(14, "01".., 504) = 504
4934/16: close(14) = 0
4934/16: open("/proc/23356/status", O_RDONLY) = 14
4934/16: fcntl(14, F_SETFD, 0x00000001) = 0
4934/16: read(14, "t @ 001 [ <".., 1776) = 1776
4934/16: close(14) = 0

Similarly running an

$ORACLE_HOME/opmn/bin/opmnctl status

includes every single Forms Runtime process in its status list, and can be very slow to return.

That doesn’t sound like it’s going to be particularly scalable and was certainly causing us problems – but there’s very little in the way of documentation about it.

What we found was that removing the section

from the opmn.xml file in

$MW_HOME/asinst_1/config/OPMN/opmn

and then forcing a reload with

$ORACLE_HOME/opmn/bin/opmnctl reload

stopped the opmn process from monitoring the /proc filesystem for Forms Runtime processes, and caused the status command to list just the other components.

The CPU usage of the opmn process has now dropped dramatically, and the 11gAS web console is much more responsive than it was.

We’re not sure what other impact this change might have, although at this stage everything seems to be working as normal, so if you decide to make the change yourself, make sure you test it out thoroughly first.