Convert Oracle Auto Login Only Wallet to PKCS12

I had trouble finding any notes on converting an Oracle auto-login-only wallet (cwallet.sso) into a PKCS12 format that can be used by other web servers such as Apache (e.g. ewallet.p12).

This may not be the most efficient method, but it worked for me. First, use the orapki utility to convert the wallet to a Java Keystore, e.g.

orapki wallet pkcs12_to_jks -wallet cwallet.sso -jksKeyStoreLoc ewallet.jks

Obviously choose a secure password when prompted.

Then use the Java keytool utility to convert the keystore into an ewallet.p12 e.g.

keytool -importkeystore -srckeystore ewallet.jks -srcstoretype JKS -deststoretype PKCS12 -destkeystore ewallet.p12

You’ll need the password you chose for your Java Keystore, and to provide a new one for your PKCS12 wallet.
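If openssl is available, it gives a quick sanity check that the resulting file really is a valid PKCS12 wallet before you hand it to Apache. This is a self-contained sketch: the throwaway key, certificate, and the Secret123 password are placeholders created just for the example; in practice you would point the final command at the ewallet.p12 produced by keytool above.

```shell
# Create a throwaway key and self-signed certificate so this example is
# self-contained (placeholders; use your real ewallet.p12 in practice).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout demo.key -out demo.crt -days 1 -subj "/CN=demo" 2>/dev/null

# Package them as PKCS12, as keytool would have done.
openssl pkcs12 -export -inkey demo.key -in demo.crt \
  -out ewallet.p12 -passout pass:Secret123

# List the wallet contents; a valid PKCS12 file prints its certificate subject.
openssl pkcs12 -in ewallet.p12 -passin pass:Secret123 -nokeys -clcerts \
  | openssl x509 -noout -subject
```

If the last command prints the certificate subject rather than an error, the wallet parsed cleanly as PKCS12.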


Latch free waits, Dynamic Statistics, and the Result Cache

2016-11-29 EDIT: Oracle has announced that in 12.2 the features that drove this overuse of the result cache (dynamic stats triggered by SPDs) will default to off, and provided patches to apply to 1…

Source: Latch free waits, Dynamic Statistics, and the Result Cache

Oracle RDBMS 12.1.0.1 SE ACL Performance

We have come across an issue with Oracle RDBMS 12.1.0.1 Standard Edition (SE) that impacts the performance of all requests from the database to external network resources, e.g. calls to web services. The issue appears to be due to a dependency in the Oracle code on the Result Cache (a feature that is unavailable in SE).

There is a simple test case that highlights the problem:

set timing on

declare
  http_request  utl_http.req;
  http_response utl_http.resp;
  text          varchar2(32767);
  i             number;
begin
  i := 0;
  while i < 1000
  loop
    http_request := utl_http.begin_request('http://localhost:8087/index.html');
    http_response := utl_http.get_response(http_request);
    utl_http.read_text(http_response, text, 32766);
    utl_http.end_response(http_response);
    i := i + 1;
  end loop;
end;
/

You will need to use a URL that is accessible from your database, and create an appropriate ACL to give you access to it.
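For reference, a minimal pre-12.2 ACL that would let the test block reach localhost on port 8087 might look like the following. This is a hedged sketch: APPUSER is a placeholder for whichever schema runs the UTL_HTTP block, and the file name is arbitrary.

```sql
-- Hypothetical ACL setup for the test case above (pre-12.2 interface);
-- APPUSER is a placeholder schema name.
begin
  dbms_network_acl_admin.create_acl(
    acl         => 'localhost_http.xml',
    description => 'HTTP access to localhost for UTL_HTTP testing',
    principal   => 'APPUSER',
    is_grant    => true,
    privilege   => 'connect');
  dbms_network_acl_admin.assign_acl(
    acl        => 'localhost_http.xml',
    host       => 'localhost',
    lower_port => 8087,
    upper_port => 8087);
  commit;
end;
/
```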

If you run that code in a 12.1.0.1 SE database you will see an elapsed time of several minutes e.g.

12.1.0.1 SE
Elapsed: 00:03:26.81

On further investigation it should be clear that this internal Oracle routine is at fault:

SQL HASH_VALUE EXECUTIONS Seconds Sec/Exec
-------------------------------------------------------------------------------- ---------- ---------- ------------ ---------------
begin :1 := dbms_network_acl_admin.get_host_aclids(:2,:3); end; 2185038870 2000 201.29 0.10064

The same test case executed in either 12.1.0.1 Enterprise Edition or 12.1.0.2 SE2 will run much faster e.g.

12.1.0.2 SE2
Elapsed: 00:00:03.23

The CPU time spent on the internal Oracle routine will also be much lower i.e.

SQL HASH_VALUE EXECUTIONS Seconds Sec/Exec
-------------------------------------------------------------------------------- ---------- ---------- ------------ ---------------
begin :1 := dbms_network_acl_admin.get_host_aclids(:2,:3); end; 2185038870 2000 0.71 0.00036

We believe this is because the code behind dbms_network_acl_admin.get_host_aclids in 12.1.0.1 SE is dependent on the Result Cache.

The Result Cache is a feature Oracle have disabled in SE, and hence the performance of this internal Oracle routine is impacted.

An SR has been logged with Oracle, who have agreed it is due to the Result Cache but have yet to offer a solution.

Apex Schema Optimizer Statistics

Providing the Oracle optimizer with accurate statistics on the Apex schema tables can be challenging.

Many of the tables tracking Apex session state information see such a huge turnover in data that statistics gathered overnight can quickly become stale, resulting in a poor choice of execution plan by the optimizer and seriously degraded performance.

There are several ways to tackle the issue e.g. locking the stats on some of the tables to values that are known to perform reliably.

However I believe it’s better to give the optimizer the most accurate information you can, so it can make more informed decisions based on the actual data in the tables.

The problem is that even if you refresh the optimizer statistics often, you need to ensure that any execution plans stored in the shared pool are also cleared, or you risk still using a plan that does not reflect the new statistics.

Fortunately, you can achieve all this with one relatively straightforward job:

conn / as sysdba

variable v_jobno number;
begin
  dbms_job.submit(
    :v_jobno,
    'dbms_stats.gather_schema_stats(ownname=>''APEX_040200'',options=>''GATHER STALE'',cascade=>true,no_invalidate=>false);',
    sysdate+(5/(24*60)),
    'sysdate+(5/(24*60))',
    TRUE);
  commit;
end;
/

This will create a job that runs every five minutes and refreshes statistics on any Apex tables that are marked as stale; in addition, thanks to no_invalidate=>false, it invalidates any cursors in the shared pool whose plans would be affected by the newly gathered statistics.

You should make sure the schema name (APEX_040200 here) reflects your Apex version, but with this job in place you should have no worries about stale statistics on the Apex session state tables impacting your application performance.
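To confirm the job is keeping up, a quick query against the dictionary shows which tables in the schema are still flagged as stale. Adjust the owner to match your Apex version; shortly after the job has run, this should return few or no rows.

```sql
-- Sanity check: which tables in the Apex schema still have stale statistics?
select table_name, last_analyzed, stale_stats
  from dba_tab_statistics
 where owner = 'APEX_040200'
   and stale_stats = 'YES'
 order by table_name;
```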

Statspack Purge Job

Adding a job to take hourly statspack snapshots has always been simple, e.g.

variable v_jobno number;
begin
  dbms_job.submit(:v_jobno,
    'statspack.snap(i_snap_level=>7);',
    trunc(sysdate+1/24,'HH'),
    'trunc(sysdate+1/24,''HH'')',
    TRUE);
  commit;
end;
/
Purging snapshots automatically used to be a little more difficult; fortunately, with more recent versions of statspack you can now do, e.g.

variable v_jobno number;
begin
  dbms_job.submit(:v_jobno,
    'statspack.purge(i_num_days=>14,i_extended_purge=>TRUE);',
    trunc(sysdate+1),
    'trunc(sysdate+1)',
    TRUE);
  commit;
end;
/
This creates a dbms_job that runs every day at midnight, purging any snapshots older than 14 days.
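With both jobs in place, a quick query against the snapshot table confirms the retention window is being enforced. This assumes the default PERFSTAT owner; with the purge job running, the oldest snapshot should stay within roughly 14 days of sysdate.

```sql
-- Oldest and newest retained snapshots, plus a count; assumes the
-- default PERFSTAT schema owns the statspack tables.
select min(snap_time) oldest_snap,
       max(snap_time) newest_snap,
       count(*)       snap_count
  from perfstat.stats$snapshot;
```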