Channel: Oracle Blogs | Oracle The Shorten Spot (@theshortenspot) Blog

Security Rollups for CCB/OUAF


The Oracle Utilities Customer Care And Billing and Oracle Utilities Application Framework products ship security rollups on a regular basis (especially for older releases of the products). These patch sets bundle all the security patches into a small number of downloads (one for the CCB product and one for the OUAF product). Products other than CCB can install the OUAF patch sets to take advantage of the rollup.

The following rollups are available from support.oracle.com:

CCB Version   Patch Number   OUAF Version   Patch Number
2.3.1         27411229       2.2.0          26645120
2.4.0.1       27380195       4.2.0.1.0      26645171
2.4.0.2       27380216       4.2.0.2.0      26645183
2.4.0.3       27380238       4.2.0.3.0      26645095
2.5.0.1       27380273       4.3.0.1.0      26645209

For more information refer to the individual patches. For newer releases not listed, the patches are already included in the base releases so no additional effort is required.


Clarification of XAI, MPL and IWS


A few years ago, we announced that XML Application Integration (XAI) and the Multipurpose Listener (MPL) were being retired from the product and replaced with Inbound Web Services (IWS) and Oracle Service Bus (OSB) Adapters.

In the next service pack of the Oracle Utilities Application Framework, XAI and MPL will finally be removed from the product.

The following applies to this:

  • The MPL software and XAI Servlet will be removed from the code. This is the final step in the retirement process. The tables associated with XAI and MPL will not be removed from the product for backward compatibility with newer adapters. Maintenance functions that will be retained will be prefixed with Message rather than XAI. Menu items not retained will be disabled by default. Refer to release notes of service packs (latest and past) for details of the menu item changes.
  • Customers using XAI should migrate to Inbound Web Services using the following guidelines:
    • XAI Services using the legacy Base and CorDaptix adapters will be automatically migrated to Inbound Web Services. These services will be auto-deployed using the Inbound Web Services Deployment online screen or iwsdeploy utility.
    • XAI Services using the Business adapter (sic) can either have their definitions migrated manually to Inbound Web Services or use a technique similar to that outlined in Converting your XAI Services to IWS using scripting. Partners should take the opportunity to rationalize their number of web services using the multi-operation capability in Inbound Web Services.
    • XAI Services using any adapter other than those listed above are not migratable, as they are typically internal services for use with the MPL.
  • Customers using the Multi-purpose Listener should migrate to Oracle Service Bus with the relevant adapters installed.
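The migration guidelines above amount to a simple classification by adapter type. A minimal sketch of that decision rule, purely illustrative (the function and return strings are invented, not a product utility):

```python
# Illustrative sketch of the XAI-to-IWS migration rules above.
# Adapter names mirror the bullet points; this is not a product utility.

def migration_action(adapter: str) -> str:
    """Return the suggested migration path for an XAI service adapter."""
    if adapter in ("Base", "CorDaptix"):
        # Auto-migrated, then deployed via the IWS Deployment screen or iwsdeploy.
        return "auto-migrated to IWS"
    if adapter == "Business":
        # Manual migration or script-assisted conversion.
        return "migrate manually or via scripting"
    # Remaining adapters are internal MPL services and are not migratable.
    return "not migratable (internal MPL service)"

for adapter in ("Base", "CorDaptix", "Business", "Staging Upload"):
    print(f"{adapter}: {migration_action(adapter)}")
```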

A number of key whitepapers available from My Oracle Support can assist in this process.

Managing Your Environments


With the advent of easier and easier techniques for creating and maintaining Oracle Utilities environments, the number of environments will start to grow, increasing costs and introducing more risk into a project. This applies to on-premise as well as cloud implementations, though cloud implementations have more visible costs.

An environment is a copy of the Oracle Utilities product (one software installation and one database at a minimum).

To minimize your costs and optimize the number of environments to manage, there are a few techniques that may come in handy:

  • Each Environment Must Be On Your Plan - Environments are typically used to support an activity or group of activities on some implementation plan. If the environment does not support any activities on a plan then it should be questioned.
  • Each Environment Must Have An Owner - When I started working in IT a long time ago, the CIO of the company I worked for noticed the company had over 1500 IT systems. To rationalize them, he suggested shutting them all down and seeing who screamed to have them back on. That way he could figure out what was important to which part of the business. While this technique is extreme, it points out an interesting thought. If you can identify the owner of each environment, then that owner is responsible for determining the life of that environment, including its availability and performance. Consider removing environments not owned by anyone.
  • Each Environment Should Have a Birth Date And End Date - As an extension to the first point, each environment should have a date when it is needed and a date when it is no longer needed. It is possible for an environment to be perpetual, for example Production, but generally environments are needed for a particular time frame. For example, you might be creating environments to support progressive builds, where you would keep a window of builds (a minimal set, I hope). That would dictate the life-cycle of the environment. This is very common in cloud environments, where you can reserve capacity dynamically, so imposing time limits enforces regular reassessment.
  • Reuse Environments - I have been on implementations where individual users wanted their own personal environments. While this can be valid in some situations, it is much better to encourage reuse of environments across users and across activities. If you can plan out your implementation you can identify how to best reuse environments to save time and costs.
  • Ask Questions; Don't Assume - When agreeing to create and manage an environment, ask the above questions and more to ensure that the environment is needed and will support the project appropriately for the right amount of time. I have been on implementations where 60 environments existed initially and, after applying these techniques and others, we were able to reduce that to around 20. That saved a lot of costs.
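The checks above can be sketched as a simple inventory review: flag any environment that is not on the plan, has no owner, or is past its end date. The record layout and environment names here are invented for illustration:

```python
# Hypothetical environment inventory check based on the questions above:
# every environment should be on the plan, have an owner, and still be
# within its agreed life span.

from datetime import date

def flag_for_review(envs, today):
    """Return names of environments that fail any of the checks above."""
    flagged = []
    for env in envs:
        expired = env["end_date"] is not None and env["end_date"] < today
        if not env["on_plan"] or env["owner"] is None or expired:
            flagged.append(env["name"])
    return flagged

envs = [
    {"name": "DEV1", "on_plan": True, "owner": "Build Team", "end_date": date(2018, 12, 31)},
    {"name": "SCRATCH", "on_plan": False, "owner": None, "end_date": None},
]
print(flag_for_review(envs, date(2018, 7, 1)))  # → ['SCRATCH']
```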

So why the emphasis on keeping your environments to a minimal number, given that the techniques for building and managing them are getting easier? Well, no matter how easy that becomes, keeping an environment consumes resources (computer and people), and keeping environments to a minimum keeps costs minimized.

The techniques outlined above apply to Oracle Utilities products but can be applied to other products with appropriate variations.

For additional advice on this topic, refer to the Software Configuration Management Series (Doc Id: 560401.1) whitepapers available from My Oracle Support.

Capacity Planning Connections


Customers and partners regularly ask me questions about the capacity of traffic on their Oracle Utilities product implementations and how to best handle their expected volumes.

The key to answering this question is to understand a number of key concepts:

  • Capacity is related to the number of users, threads etc. (let's call them actors, to be generic) that are actively using the system. As the Oracle Utilities Application Framework is stateless, actors only consume resources when they are active on some part of the architecture. If they are idle then they are not consuming resources. This is important, as the number of logged-on users does not dictate capacity.
  • The goal of capacity planning is to have enough resources to handle peak loads and to minimize capacity when the load drops to the minimum expected. This makes sure you have enough for the busy times but also that you are not wasting resources.
  • Capacity is not just online users; it also includes batch threads, Web Service clients, REST clients and mobile clients (for mobile application interfaces). It is a combination of all of these channels. Each channel can be monitored individually to determine its capacity.
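Because actors only consume resources while active, the useful number per channel is peak concurrency, not user counts. A small sketch of computing peak concurrency from (start, end) times of requests; the sample intervals are invented:

```python
# Sketch: the framework is stateless, so capacity is driven by concurrent
# *active* requests per channel, not logged-on users. Given (start, end)
# times of requests, compute the peak concurrency via a sweep over events.

def peak_concurrency(intervals):
    """Peak number of simultaneously active requests."""
    events = []
    for start, end in intervals:
        events.append((start, 1))   # request becomes active
        events.append((end, -1))    # request completes
    events.sort()
    current = peak = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak

# Invented sample: four online requests with overlapping activity windows.
online = [(0, 5), (1, 3), (2, 6), (10, 12)]
print(peak_concurrency(online))  # → 3
```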

This is the advice I tend to give customers who want to monitor capacity:

  • For channels using Oracle WebLogic, you want to use Oracle WebLogic MBeans such as ThreadPoolRuntimeMBean (using ExecuteThreads) for protocol level monitoring. If you want to monitor each server individually to get an idea of capacity, then you might want to try ServerChannelRuntimeMBean (using ConnectionsCount). In the latter case, look at each channel individually to see what your traffic looks like.
  • For Batch, when using it with Oracle Coherence, then use the inbuilt Batch monitoring API (via JMX) and use the sum of NumberOfMembers attribute to determine the active number of threads etc running in your cluster. Refer to the Server Administration Guide shipped with the Oracle Utilities product for details of this metric and how to collect it.
  • For database connections, it is more complex, as connection pools (regardless of the technique used) rely on a maximum size limit. If this limit is exceeded then you want to know how many pending requests are waiting, to detect how much bigger the pool should be. The calculations are as follows:

Note: You might notice that the database active connections are actually calculations. This is because the metrics capture capacity within a limit, and you need to take into account when the limit has been reached and requests are waiting.
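One plausible way to express the calculation the note describes, assuming you have sampled the pool's current connection count, its configured maximum and the number of waiting requests (the function and numbers are illustrative, not a product formula):

```python
# Illustrative calculation: when the pool is at its limit, waiting requests
# represent demand the pool could not serve, so true demand is limit + queue.

def active_demand(current, maximum, waiting):
    """Estimate true connection demand from sampled pool metrics."""
    if current >= maximum:
        return maximum + waiting
    return current

print(active_demand(current=40, maximum=50, waiting=0))  # → 40 (under limit)
print(active_demand(current=50, maximum=50, waiting=7))  # → 57 (limit reached)
```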

The above metrics should be collected at peak and non-peak times. This can be done manually or using Oracle Enterprise Manager.

Once the data is collected, it is recommended it be used for the following:

  • Connection Pool Sizes - The connection pools should be sized using the minimum values experienced and the maximum values, with some tolerance for growth.
  • Number of Servers to Set Up - For each channel, determine the number of servers based upon these numbers and the capacity of each server. Typically, a minimum of two servers should be set up for a minimal high availability solution. Refer to the Oracle Maximum Availability Architecture for more advice.
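The pool sizing advice above can be sketched directly: take the observed minimum as the initial size and the observed maximum plus a growth tolerance as the maximum. The 20% tolerance here is an assumption for illustration, not product guidance:

```python
# Sketch of the sizing recommendation above: size pools from observed
# minimum and maximum demand, with a tolerance for growth.

def pool_sizing(samples, growth=0.20):
    """Suggest initial/maximum pool sizes from sampled demand values."""
    return {
        "initial": min(samples),                       # quietest observed load
        "maximum": round(max(samples) * (1 + growth)),  # peak plus headroom
    }

print(pool_sizing([12, 30, 45, 38]))  # → {'initial': 12, 'maximum': 54}
```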

Why the XAI Staging is not in the OSB Adapters?


With the replacement of the Multi-Purpose Listener (MPL) by the Oracle Service Bus (OSB), with additional OSB Adapters for Oracle Utilities Application Framework based products, customers have asked about transaction staging support.

One of the most common questions I have received is why there is an absence of an OSB Adapter for the XAI Staging table. Let me explain the logic.

  • One Pass versus Two Passes. The MPL processed its integration by placing the payload from the integration into the XAI Staging table. The MPL would then process the payload in a second pass. The staging record would be marked as complete or error. The complete ones would need to be removed using the XAI Staging purge process, run separately. You then used the XAI Staging portals to correct the data coming in for the ones in error. On the other hand, the OSB Adapters treat the product as a "black box" (i.e. like a product): they directly call the relevant service (for inbound) and poll the relevant Outbound or NDS table for outbound processing records directly. This is a single pass process, rather than the multiple passes the MPL used. OSB is far more efficient and scalable than the MPL because of this.
  • Error Hospital. The idea behind the XAI Staging is that error records remain in there for possible correction and reprocessing. This was a feature of MPL. In the OSB world, if a process fails for any reason, the OSB can be configured to act as an Error Hospital. This is effectively the same as the MPL except you can configure the hospital to ignore any successful executions which reduces storage. In fact, OSB has features where you can detect errors anywhere in the process and allows you to determine which part of the integration was at fault in a more user friendly manner. OSB effectively already includes the staging functionality so adding this to the adapters just duplicates processing. The only difference is that error correction, if necessary, is done within the OSB rather than the product.
  • More flexible integration model. One of the major reasons to move from the MPL to the OSB is the role that the product plays in integration. If you look at the MPL model, any data that was passed to the product from an external source was automatically the responsibility of the product (that is how most partners implemented it). This means the source system had no responsibility for the cleanliness of their data as you had the means of correcting the data as it entered the system. The source system could send bad data over and over and as you dealt with it in the staging area that would increase costs on the target system. This is not ideal. In the OSB world, you can choose your model. You can continue to use the Error Hospital to keep correcting the data if you wish or you can configure the Error Hospital to compile the errors and send them back, using any adapter, to the source system for correction. With OSB there is a choice, MPL did not really give you a choice.

With these considerations in place, it was not efficient to add an XAI Staging Adapter to OSB, as it would duplicate effort and decrease efficiency, which negatively impacts scalability.

EMEA Edge Conference 2018


I will be attending the EMEA Oracle Utilities Edge Conference on 26 - 27 June 2018 in the Oracle London office. This year we are running an extended set of technical sessions around on-premise implementations and the Oracle Utilities Cloud Services. This forum is open to Oracle Utilities customers and Oracle Utilities partners.

The sessions mirror the technical sessions held at the conference in the USA earlier this year, covering the following topics:

  • Reducing Your Storage Costs Using Information Life-cycle Management - With increasing storage costs, satisfying business data retention rules can be challenging. The Oracle Information Life-cycle Management solution can help simplify your storage solution and harness the power of the hardware and software to reduce storage costs.
  • Integration using Inbound Web Services and REST with Oracle Utilities - Integration is a critical part of any implementation. The Oracle Utilities Application Framework has a range of facilities for integrating from and to other applications. This session will highlight all the facilities and where they are best suited to be used.
  • Optimizing Your Implementation - Implementations have a wide range of techniques available to implement successfully. This session will highlight a group of techniques that have been used by partners and our cloud implementations to reduce Total Cost Of Ownership.
  • Testing Your On-Premise and Cloud Implementations - Our Oracle Testing solution is popular with on-premise implementations. This session will outline the current testing solution as well as our future plans for both on premise and in the cloud.
  • Securing Your Implementations - With the increase in cybersecurity and privacy concerns in the industry, a number of key security enhancements have been made available in the product to support simple or complex security setups for on-premise and cloud implementations.
  • Turbocharge Your Oracle Utilities Product Using the Oracle In-Memory Database Option - The Oracle Database In-Memory option allows both OLTP and analytics to run much faster using advanced techniques. This session will outline the capability and how it can be used in existing on-premise implementations to provide superior performance.
  • Developing Extensions using Groovy - Groovy has been added as a supported language for on-premise and cloud implementations. This session outlines the way that Groovy can be used in building extensions. Note: This session will be very technical in nature.
  • Ask Us Anything Session - Interaction with the customer and partner community is key to the Oracle Utilities product lines. This interactive session allows you (the customers and partners) to ask technical resources within Oracle Utilities the questions you would like answered. The session will also allow Oracle Utilities to discuss directions and poll the audience on key initiatives to help plan road maps.

Note: These sessions are not recorded, and materials are not distributed outside this forum.

This year we have decided to not only discuss capabilities but also give an idea of how we use those facilities in our own cloud implementations to reduce our operating costs, for you to use as a template for on-premise and hybrid implementations.

See you there if you are attending.

If you wish to attend, contact your Oracle Utilities local sales representative for details of the forum and the registration process.

Data Management with Oracle Utilities products


One of the most common questions I receive is about how to manage data volumes in the Oracle Utilities products. The Oracle Utilities products are designed to scale no matter how much data is present in the database, but obviously the storage costs and management of large amounts of data are not optimal.

A few years ago we adopted the Information Lifecycle Management (ILM) capabilities of the Oracle Database as well as developed a unique spin on the management of data. Like biological life, data has a lifecycle. It is born when it is created, it has an active life while the business uses or manipulates it, it goes into retirement but is still accessible and eventually it dies when it is physically removed from the database. The length of that lifecycle will vary from data type to data type, implementation to implementation. The length of the life is dictated by its relevance to the business, company policies and even legal or government legislation.

The data management (ILM) capabilities of Oracle Utilities take this into account:

  • Data Retention Configuration. The business configures how long the active life of the individual data types are for their business. This defines what is called the Active Period. This is when the data needs to be in the database and accessible to the business for update and active use in their business.
  • ILM Eligibility Rules. Once the data retention period is reached, before the data can enter retirement, the system needs to know that anything outstanding, from a business perspective, has been completed. This is the major difference from most data management approaches. I hear DBAs saying that they would rather the data just be deleted after a specific period. Whilst that would cover most situations, it would not cover a situation where the business is not finished with the data. Let's explain with an example. In CCB, customers are billed and you can also record complaints against a bill if there is a dispute. Depending on the business rules and legal processes, an old bill may be in dispute. You should not remove anything related to that bill until the complaint is resolved, regardless of its age. Legal issues can be drawn out for lots of reasons. If you use a retention rule only, then the data used in the complaint would potentially be lost. In the same situation, the base ILM Eligibility rules would detect something outstanding and bypass the applicable records. Remember, these rules are protecting the business and ensuring that the ILM solution adheres to the complex rules of the business.
  • ILM Features in the Database. Oracle, like a lot of vendors, introduced ILM features into the database to help with what I like to call storage-managing the data. This provides a set of flexible options and features allowing database administrators a full range of possibilities for their data management needs. Here are the capabilities (refer to the Database Administration Guide for details of each capability):
    • Partitioning. One of the most common capabilities is using the Partitioning option. This allows a large table to be split up, storage wise, into parts or partitions using a partitioned tablespace. This breaks up the table into manageable pieces and allows the database administrator to optimize the storage using hardware and/or software options. Some hardware vendors have inbuilt ILM facilities, and this option allows you to target specific data partitions to different hardware capabilities or just split the data into tranches (for example, to separate the retirement stages of data). Partitioning is also a valid option if you want to use hardware storage tiered based solutions to save money. In this scenario you would put the less used data on cheaper storage (if you have it) to save costs. For Partitioning advice, refer to the product DBA Guides, which outline the most common partitioning schemes used by customers.
    • Advanced Compression. One of the popular options is the Advanced Compression option. This allows administrators to set compression rules against the database based upon data usage. The compression is transparent to the product, and compressed data can be co-located with uncompressed data with no special processing needed by the code. The compression covers a wide range of techniques, including CLOB compression as well as data compression. Customers using Oracle Exadata can also use Hybrid Columnar Compression (HCC) for hardware assisted compression for greater flexibility.
    • Heat Map. One of the features added to Oracle Database 12c and above to help DBAs is the Heat Map. This is a facility where the database tracks the usage patterns of the data in your database and gives you feedback on the activity of the individual rows in the database. This is an important tool as it helps the DBA identify which data is actually being used by the business and therefore what is important to optimize. It is even useful in the active period, to determine which data can be safely compressed because it has reduced update activity against it. It is part of the autonomous capabilities of the database.
    • Automatic Data Optimization. Automatic Data Optimization (ADO) is a feature of the database that allows database administrators to implement rules to manage storage based upon various metrics, including the Heat Map. For example, the DBA can put in a rule that says if data in a specific table is not touched for X months then it should be compressed. The rules cover compression, partition movement, storage features etc. and can be triggered by Heat Map data or any other valid metric (even SQL procedure code can be used).
    • Transportable Tablespaces. One of the most expensive things you can do in the database is issue a DELETE statement. To avoid this in bulk in any ILM based solution, Oracle offers the ability to use the Partitioning option and create a virtual trash bin via a transportable tablespace. Using ADO or other capabilities, you can move data into this tablespace and then, using basic commands, detach the tablespace to do bulk removal quickly. An added advantage is that you can archive that tablespace and reconnect it later if needed.
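The two-stage decision described above (retention period first, then eligibility rules) can be sketched as follows. The field names, the 24-month retention and the single "open complaint" check are invented for the example; real eligibility rules are far richer:

```python
# Illustrative sketch of the ILM decision above: a record leaves its active
# period only after the retention period has passed AND the eligibility
# rules find nothing outstanding (e.g. an open complaint against a bill).

from datetime import date

RETENTION_MONTHS = 24  # invented retention period for the example

def months_between(a, b):
    return (b.year - a.year) * 12 + (b.month - a.month)

def eligible_for_retirement(record, today):
    past_retention = months_between(record["created"], today) >= RETENTION_MONTHS
    nothing_outstanding = not record["open_complaint"]
    return past_retention and nothing_outstanding

# An old bill still in dispute is bypassed despite its age.
disputed_bill = {"created": date(2015, 1, 15), "open_complaint": True}
print(eligible_for_retirement(disputed_bill, date(2018, 6, 1)))  # → False
```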

The Oracle Utilities ILM solution is comprehensive and flexible, combining an aspect for the business to define their retention and eligibility rules with the various ILM capabilities in the database for the database administrator to factor in their individual site's hardware and support policies. It is not as simple as removing data in most cases, and the Oracle Utilities ILM solution reduces the risk of managing your data, taking into account both your business and storage needs.

For more information about the Oracle Utilities ILM solution, refer to the ILM Planning Guide (Doc Id: 1682436.1) available from My Oracle Support and read the product DBA Guides for product specific advice.

Oracle Utilities and the Oracle Database In-Memory Option


A few years ago, Oracle introduced an In-Memory option for the database to optimize analytical style applications. In Oracle Database 12c and above, the In-Memory option has been enhanced to support other types of workloads. All Oracle Utilities products are now certified to use the Oracle In-Memory option, on Oracle Database 12c and above, to allow customers to optimize the operational and analytical aspects of the products.

The Oracle In-Memory option is a memory based column store that co-exists with the existing caching schemes used within Oracle to deliver faster access speeds for complex queries across the products. It is transparent to the product code and can be easily implemented with a few simple changes to the database to specify the objects to store in memory. Once configured, the Oracle Cost Based Optimizer becomes aware of the data loaded into memory and adjusts the execution plan accordingly, delivering much better performance in almost all cases.

There are just a few configuration changes that need to be made:

  • Enable the In-Memory Option. The In-Memory capability is already in the database software (no relinking necessary) but it is disabled by default. After licensing the option, you can enable it by setting the amount of the SGA you want to use for the In-Memory store. Remember to ensure that the SGA is large enough to cover the existing memory areas as well as the In-Memory store. These are just a few database initialization parameter settings.
  • Enable Adaptive Plans. To tell the optimizer to take the In-Memory option into account, you need to enable Adaptive Plans. This is flexible: you can turn off In-Memory support without changing the In-Memory settings.
  • Decide the Objects to Load into Memory. Now that the In-Memory Option is enabled the next step is to decide what is actually loaded into memory. Oracle provides an In-Memory Advisor that analyzes workloads to make suggestions.
  • Alter Objects to Load into Memory. Create the SQL DDL statements that instruct the database to load the objects into memory. These include priority and compression options for the objects, to maximize the flexibility of the option. The In-Memory Advisor can be configured to generate these statements from its analysis.
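The final step above is plain DDL. A small sketch that builds ALTER TABLE statements of the kind the In-Memory Advisor might generate; the table name, priority and compression values are invented examples, so verify the advisor's actual output and syntax for your release:

```python
# Build an illustrative ALTER TABLE ... INMEMORY statement. The table name
# and chosen options are assumptions for the example, not product advice.

def inmemory_ddl(table, priority="NONE", compression="FOR QUERY LOW"):
    """Compose an In-Memory DDL statement for the given table."""
    return (f"ALTER TABLE {table} INMEMORY "
            f"MEMCOMPRESS {compression} PRIORITY {priority}")

print(inmemory_ddl("CI_BILL", priority="HIGH"))
```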

No changes to the code are necessary to use the option to speed up common queries in the products as well as analytical queries.

A new Implementing Oracle In-Memory Option (Doc Id: 2404696.1) whitepaper, available from My Oracle Support, has been published which outlines details of this process as well as specific guidelines for implementing this option.

PS. The Oracle In-Memory Option has been significantly enhanced in Oracle Database 18c.

 


Oracle WebLogic 12.2.1.x Configuration Guide for Oracle Utilities available


A new whitepaper is now available for use with Oracle Utilities Application Framework based products that support Oracle WebLogic 12.2.1.x and above. The whitepaper walks through the setup of the domain using the Fusion Domain Templates instead of the templates supplied with the product. In future releases of the Oracle Utilities Application Framework, the product-specific domain templates will not be supplied, as the Fusion Domain Templates take a more prominent role in deploying Oracle Utilities products.

The whitepaper covers the following topics:

  • Setting up the Domain for Oracle Utilities products
  • Additional Web Services configuration
  • Configuration of Global Flush functionality in Oracle WebLogic 12.2.1.x
  • Frequently asked installation questions

The whitepaper is available as Oracle WebLogic 12.2.1.x Configuration Guide (Doc Id: 2413918.1) from My Oracle Support.

Updated Technical Best Practices


The Oracle Utilities Application Framework Technical Best Practices whitepaper has been revamped and updated to reflect new advice, new versions and the cloud implementations of Oracle Utilities Application Framework based products. The following changes have been made:

  • Formatting change. The whitepaper uses a new template for its content, which is being rolled out across Oracle products.
  • Removed out-of-date advice. Advice that applied to older versions and is no longer appropriate has been removed from the document. This is ongoing, to keep the whitepaper current and optimal.
  • Added Configuration Migration Assistant advice. With the increased emphasis of the use of CMA we have added a section outlining some techniques on how to optimize the use of CMA in any implementation.
  • Added Optimization Techniques advice. With the implementation of the cloud, there are various techniques we use to reduce our costs and risks on that platform. We added a section outlining some common techniques that can be reused for on-premise implementations. This is based upon a series of talks given at customer forums over the last year or so.
  • Added Preparing Your Implementation for the Cloud advice. This is a new section outlining the various techniques that can be used to prepare an on-premise implementation for moving to the Oracle Utilities Cloud SaaS Services. This is based upon a series of talks given at customer forums over the last year or so.

The new version of the whitepaper is available at Technical Best Practices (Doc Id: 560367.1) from My Oracle Support.

New Oracle Utilities Testing Accelerator (6.0.0.0)


I am pleased to announce the next chapter in automated testing solutions for Oracle Utilities products. In the past, some Oracle Utilities products have used the Oracle Application Testing Suite with packaged content to provide an amazing functional and regression testing solution. Building upon that success, a new solution named the Oracle Utilities Testing Accelerator has been introduced: a new, optimized and focused solution for Oracle Utilities products.

The new solution has the following benefits:

  • Component Based. As with Oracle's other testing solutions, this new solution is based upon testing components and flows, with flow generation and databank support. Those capabilities were popular with our existing testing solution customers and exist in expanded forms in the new solution.
  • Comprehensive Content for Oracle Utilities. As with Oracle's other testing solutions, supported products provide pre-built content to significantly reduce the cost of adopting automation. In this solution, the number of products within the Oracle Utilities portfolio providing content has greatly expanded. This now includes both on-premise products as well as our growing portfolio of cloud based solutions.
  • Self Contained Solution. The Oracle Utilities Testing Accelerator architecture has been simplified so customers can quickly deploy the product with a minimum of fuss and prerequisites.
  • Used by Product QA. The Oracle Utilities Product QA teams use this product on a daily basis to verify the Oracle Utilities products. This means that the content provided has been certified for use on supported Oracle Utilities products and reduces risk of adoption of automation.
  • Behavior-Driven Development Support. One of the most exciting capabilities introduced in this new solution is support for Behavior-Driven Development (BDD), which is popular with newer Agile based implementation approaches. One of the major goals of the new testing capability is to reduce rework from the Agile process in building test assets. This new capability introduces Machine Learning into the testing arena for generating test flows from the Gherkin syntax documentation produced by Agile approaches. A developer can reuse their Gherkin specifications to generate a flow quickly without the need for rework. As the capability uses Machine Learning, it can be corrected if the assumptions it makes are incorrect for the flow, and those corrections will be reused for any future flow generations.

  • Selenium Based. The Oracle Utilities Testing Accelerator uses a Selenium based scripting language for greater flexibility across the different channels supported by the Oracle Utilities products. The script is generated automatically and does not need any alteration to be executed correctly.
  • Data Independence. As with other Oracle's testing products, data is supported independently of the flow and components. This translates into greater flexibility and greater levels of reuse in using automated testing. It is possible to change data at anytime during the process to explore greater possibilities in testing.
  • Support for Flexible Deployments. Whilst the focus of the Oracle Utilities Testing Accelerator is functional and/or regression testing, it can be deployed flexibly to suit a wide range of implementation scenarios.
  • Beyond Functional Testing. The Oracle Utilities Testing Accelerator is designed to be used for testing beyond just functional testing. It can be used to perform testing in flexible scenarios including:
    • Patch Testing. The Oracle Utilities Testing Accelerator can be used to assess the impact of product patches on business processes using the flows as a regression test.
    • Extension Release Testing. The Oracle Utilities Testing Accelerator can be used to assess the impact of releases of extensions from the Oracle Utilities SDK (via the migration tools in the SDK) or after a Configuration Migration Assistant (CMA) migration.
    • Sanity Testing. In the Oracle Cloud the Oracle Utilities Testing Accelerator is being used to assess the state of a new instance of the product including its availability and that the necessary data is setup ensuring the instance is ready for use.
    • Cross Oracle Utilities Product Testing. The Oracle Utilities Testing Accelerator supports flows that cross Oracle Utilities product boundaries to model end to end processes when multiple Oracle Utilities products are involved.
    • Blue/Green Testing. In the Oracle Cloud, zero outage upgrades are a key part of the solution offering. The Oracle Utilities Testing Accelerator supports the concept of blue/green deployment testing to allow multiple versions to be able to be tested to facilitate smooth upgrade transitions.
  • Lower Skills Required. The Oracle Utilities Testing Accelerator has been designed with testing users in mind. Traditional automation involves recording with a scripting language that embeds the data and logic into a script, which is then available for a programmer to alter to make it more flexible. The Oracle Utilities Testing Accelerator instead uses an orchestration metaphor that allows a lower skilled person, not a programmer, to build test flows and generate no-touch scripts for execution.

An example of the Oracle Utilities Testing Accelerator Workbench:

New Architecture

The Oracle Utilities Testing Accelerator has been re-architected and optimized for use with Oracle Utilities products:

  • Self Contained Solution. The new design is centered on simplicity, requiring as little configuration as possible to get started.
  • Minimal Prerequisites. The Oracle Utilities Testing Accelerator only requires Java to execute and a database schema to store its data. Non-production allocations under existing Oracle Utilities product licenses are sufficient for this solution; no additional database licenses are required by default.
  • Runs on same platforms as Oracle Utilities applications. The solution is designed to run on the same operating system and database combinations supported with the Oracle Utilities products.

The architecture is simple:

UTA 6.0.0.0 Architecture

  • Product Components. A library of components from the Product QA teams ready to use with the Oracle Utilities Testing Accelerator. You decide which libraries you want to enable.
  • Oracle Utilities Testing Accelerator Workbench. A web based design toolset to manage and orchestrate your test assets. Includes the following components:
    • Embedded Web Application Server. A preset simple configuration and runtime to house the workbench.
    • Testing Dashboard. A new home page outlining the state of the components and flows installed as well as notifications for any approvals and assets ready for use.
    • Component Manager. A Component Manager to allow you to add custom components and manage the components available for use in flows.
    • Flow Manager. A Flow Manager allowing testers to orchestrate flows and manage their lifecycle, including generation of Selenium assets for execution.
    • Script Management. A script manager used to generate scripts and databanks for flows.
    • Security. A role based model to support administration, development of components/flows and approvals of components/flows.
  • Oracle Utilities Testing Accelerator Schema. A set of database objects that can be stored in any edition of Oracle (PDB or non-PDB is supported) for storing assets and configuration.
  • Oracle Utilities Testing Accelerator Eclipse-based Plug-In. An Oxygen-compatible Eclipse plugin that executes the tests, including recording of performance and payloads for detailed test analysis.

New Content

The Oracle Utilities Testing Accelerator has expanded the number of products supported and now includes Oracle Utilities Application Framework based products as well as Cloud Services products. New content will be released on a regular basis to provide additional coverage for components and a set of prebuilt flows that can be used across products.

Note: Refer to the release notes for supported Oracle Utilities products and assets provided.

Conclusion

The Oracle Utilities Testing Accelerator provides a comprehensive testing solution, optimized for Oracle Utilities products, with content provided by Oracle to allow implementations to realize lower-cost and lower-risk adoption of automated testing.

For more information about this solution, refer to the Oracle Utilities Testing Accelerator Overview and Frequently Asked Questions (Doc Id: 2014163.1) available from My Oracle Support.

Note: The Oracle Utilities Testing Accelerator is a replacement for the older Oracle Functional Testing Advanced Pack for Oracle Utilities. Customers on that product should migrate to this new platform. Utilities to convert any custom components from the Oracle Application Testing Suite platform are provided with this tool.

Using Groovy Whitepaper available


Groovy is an alternative language for building extensions for Oracle Utilities Application Framework based products in on-premise and cloud implementations. For cloud implementations it is the preferred language, replacing the Java-based extensions typically available for on-premise implementations. The implementation of Groovy in the Oracle Utilities Application Framework extends the scripting object to allow Groovy scripts, Groovy includes and Groovy libraries to be implemented. This is all controlled using a whitelist to ensure that the code is appropriate for cloud implementation.

A new whitepaper is available outlining the Groovy capability as well as some guidelines on how to use Groovy to extend Oracle Utilities products. It is available as Using Groovy Script in Oracle Utilities Applications (Doc Id: 2427512.1) from My Oracle Support.

Keep up to Date With Critical Patches


One of the most important recommendations I give to customers is to keep up to date with the latest patches, especially all the security patches, to improve performance and reduce risk.

For more information refer to My Oracle Support. Note that Oracle WebLogic, Oracle Linux, Oracle Solaris and Oracle Database patches also apply to Oracle Utilities products.

Patches available for Internet Explorer 11 performance


A number of Oracle Utilities Customer Care and Billing customers have reported performance issues with Internet Explorer 11 in particular situations. After analysis, it was ascertained that the issue lay within Internet Explorer itself. An article, Known UI Performance Issues on Internet Explorer 11 (Doc Id: 2430962.1), is available from My Oracle Support with an explanation of the issues and advice on the patches recommended to minimize the issue for affected versions.

It is highly recommended to read the article and install the patches to minimize any issues with Internet Explorer 11.

Oracle Utilities Testing Accelerator Whitepaper Updates


The Oracle Utilities Testing Accelerator Whitepaper has been updated with the latest information about the Testing capability optimized for Oracle Utilities.

The documentation is available at Oracle Utilities Testing Accelerator for Oracle Utilities (Doc Id: 2014163.1) from My Oracle Support.

The article includes the following updated documents:

  • Oracle Utilities Testing Accelerator Overview - Overview of the testing solution and how it optimizes the test experience.
  • Oracle Utilities Testing Accelerator Frequently Asked Questions - Set of common questions and answers about the Oracle Utilities Testing Accelerator, including migration from the previous Oracle Application Testing Suite based solution.
  • Oracle Utilities Testing Accelerator Data Sheet (New) - A brochure about the Oracle Utilities Testing Accelerator.

Oracle Utilities Testing Accelerator training is now available on demand via Oracle University.


Oracle Utilities Application Framework V4.3.0.6.0 Release


Oracle Utilities Application Framework V4.3.0.6.0 based products will be released over the coming months. As with past releases, the Oracle Utilities Application Framework has been enhanced with new and updated features for on-premise, hybrid and cloud implementations of Oracle Utilities products.

The Oracle Utilities Application Framework continues to provide a flexible and wide-ranging set of common services and technology to allow implementations to meet the needs of their customers. The latest release provides a wide range of new and updated capabilities to reduce costs and introduce exciting new functionality. The product ships with a complete listing of the changes and new functionality, but here are some highlights:

  • Improved REST Support - The REST support for the product has been enhanced in this release. It is now possible to register REST Services in Inbound Web Services as REST. Inbound Web Services definitions have been enhanced to support both SOAP and REST Services. This has the advantage that the registration of integration is now centralized and the server URL for the services can be customized to suit individual requirements. It is now possible to register multiple REST Services within a single Inbound Web Services to reduce costs in management and operations. Execution of the REST Services has been enhanced to use the Registry as the first reference for a service. No additional deployment effort is necessary for this capability. A separate article on this topic will provide additional information.
  • Improved Web Registry Support for Integration Cloud Service - With the changes in REST and other integration changes, such as Categories and support for other adapters, the Web Service Catalog has been expanded to support registration of REST and other services directly for use in the Oracle Integration Cloud.
  • File Access Adapter - In this release a File Adapter has been introduced to allow implementations to parameterize all file integration, reducing the cost of managing file paths and easing the path to the Oracle Cloud. In cloud implementations, an additional adapter is available to allow additional storage on the Oracle Object Storage Cloud to supplement cloud storage for Oracle Utilities SaaS solutions. The File Access Adapter includes an Extendable Lookup to define alias and physical location attributes. That lookup can then be used as an alias for file paths in Batch Controls, etc. A separate article on this topic will provide additional information.
  • Batch Start/End Date Time now part of Batch Instance Object - In past releases the Batch Start and End dates and times were located as data elements within the thread attributes, which made analysis harder to perform. In this release these fields have been promoted to reportable fields directly on the Batch Instance Object for each thread, improving the reporting of batch job performance. For backward compatibility, these fields are only populated for new executions. The internal Business Service F1-GetBatchRunStartEnd has been extended to support the new columns and also detects old executions to return the correct values regardless.
  • New Level of Service Algorithms - In past releases, Batch Level Of Service required the building of custom algorithms for checking batch levels. In this release additional base algorithms for common scenarios like Total Run Time, Throughput and Error Rate are now provided for use. Additionally, it is now possible to define multiple Batch Level Of Service algorithms to model complex requirements. The Health Check API has been enhanced to return the Batch Level Of Service as well as other health parameters. A separate article on this topic will provide additional information.
  • Job Scope in DBMS_SCHEDULER interface - The DBMS_SCHEDULER interface allowed specification of parameters at the Batch Control and Global levels as well as at runtime. In this release, it is possible to pre-define parameters within the interface at the Job level, allowing control of individual instances of Batch Controls that are used more than once across chains.
  • Ad-hoc Recalculation of To Do Priority - In a past release of the Oracle Utilities Application Framework, an algorithm to dynamically reassess and recalculate a To Do Priority was introduced. In this release, it is possible to invoke this algorithm in bulk using the newly provided F1-TDCLP Batch Control. This can be used with the algorithm to reassess To Dos and improve manual processing.
  • Introduction of a To Do Monitor Process and Algorithm - One of the issues with To Dos in the field has been that users can forget to manually close a To Do when the issue that caused the condition has been resolved. In this release a new batch control, F1-TDMON, and a new Monitor algorithm on the To Do Type have been added, so that logic can be introduced to detect the resolution of the issue and have the product automatically close the To Do.
  • New Schema Editor - Based upon feedback from partners and customers, the usability and capabilities of the Schema Editor have been improved to provide more information as part of the basic views to reduce rework and support cross browser development.
  • Process Flow Editor - A new capability has been added to the Oracle Utilities Application Framework to allow complex workflows to be modeled and fully capable process flows to be introduced. This includes train support (including advanced navigation), support for saving incomplete work, branching and object integration. This process flow editor was first introduced internally, successfully, for our cloud automation in the Oracle Utilities Cloud Services Foundation and has now been made available, in a new format, for use across Oracle Utilities Application Framework based products. A separate article on this topic will provide additional information.
  • Improved Google Chrome Support - This release introduces extensive Google Chrome for Business support. Check the availability with each of the individual Oracle Utilities Application Framework based products.
  • New Cube Viewer - In the Oracle Utilities Market Settlements product we introduced a new Cube Viewer to embed advanced analytics into our products. That capability has been made generic and is now included in the Oracle Utilities Application Framework so that products and implementations can build their own cube analytical capabilities. In this release a series of new objects and ConfigTools objects have been introduced to build Cube Viewer based solutions. Note: The Cube Viewer has been built to operate independently of Oracle In-Memory Database support but would greatly benefit from use with Oracle In-Memory Database. A separate article on this topic will provide additional information.
  • Object Erasure Support - To support various data privacy regulations introduced across the world, a new Object Erasure capability has been introduced to manage the erasure or obfuscation of master objects within the Oracle Utilities Application Framework based products. This capability is complementary to the Information Lifecycle Management (ILM) capability introduced to manage transaction objects within the product. A number of objects and ConfigTools objects have been introduced to allow implementations to add Object Erasure to their implementations. A separate article on this topic will provide additional information.
  • Proactive Update of ILM Switch Support - In past releases, ILM eligibility and the ILM switch were assessed in bulk exclusively by the ILM batch processes or by the Automatic Data Optimization (ADO) feature of the Oracle Database. To work more efficiently, it is now possible to use the new BO Enter Status and BO Exit Status plug-ins to proactively assess eligibility and set the ILM switch as part of processing, thus reducing ILM workloads.
  • Mobile Framework Auto Deploy Support - This release includes a new optional parameter to automatically deploy mobile content when a deployment is saved. This can avoid the extra manual deployment step, if desired.
  • Required Indicator on Legacy Screens - In past releases, the required indicator, based upon metadata, was introduced for ConfigTools based objects. In this release it has been extended to legacy screens built using the Oracle Utilities SDK or custom JSPs (that conform to the standards required by the Oracle Utilities Application Framework). Note: Some custom JSPs may contain logic that prevents the correct display of the required indicator.
  • Oracle Identity Manager Integration Improved - In this release the integration with Oracle Identity Manager has been improved: multiple adapters are supported and the parameters are now located in a Feature Configuration rather than properties settings. This allows the integration setup to be migrated using the Configuration Migration Assistant.
  • Outbound Message Mediator Improvements - In previous releases, implementations were required to use the Outbound Message Dispatcher (F1-OutmsgDispatcher) business service to send an outbound message without instantiating it where the outbound message Business Object pre-processing algorithms needed to be executed. This business service orchestrated a creation and deletion of the outbound message, which is not desirable for performance reasons. The alternate business service, Outbound Message Mediator (F1-OutmsgMediator), routes a message without instantiating anything, and so is preferred when the outbound message should not be instantiated. However, the Mediator did not execute the Business Object pre-processing algorithms. In this release the Mediator business service has been enhanced to also execute the Business Object pre-processing algorithms.
  • Deprecations - In this release a few technologies and capabilities will be removed as they were announced in previous releases. These include:
    • XAI Servlet/MPL - After announcing the deprecation of XAI and MPL in 2012, the servlet and MPL software are no longer available in this release. XAI Objects are retained for backward compatibility and last minute migrations to IWS and OSB respectively.
    • Batch On WebLogic - In the Oracle Cloud, batch threadpools were managed under Oracle WebLogic. Given changes to the architecture over the last few releases, these threadpools are no longer supported. As this functionality was never released for use by on-premise customers, this change does not have any impact on them.
    • WebLogic Templates - With the adoption of Oracle WebLogic 12.2+, custom WebLogic templates are no longer necessary. It is now possible to use the standard Fusion Middleware templates supplied with Oracle WebLogic with a few manual steps. These additional manual steps are documented in the new version of the Installation Guide supplied with the product. Customers may continue to use the Domain Builder supplied with Oracle WebLogic to build custom templates after Oracle Utilities Application Framework product installation. Customers should stop using the Native Installation or Clustering whitepapers for Oracle Utilities Application Framework V4.3.0.5.0 and above, as this information is now in the Installation Guide directly or in the Oracle WebLogic 12.2.1.x Configuration Guide (Doc Id: 2413918.1) available from My Oracle Support.

A number of additional articles will be published over the next few weeks covering some of these topics, along with updates to key whitepapers.

Inbound Web Services - REST Services


In Oracle Utilities Application Framework V4.3.0.6.0, the Inbound Web Services object has been extended to support both SOAP and REST based services. This has a lot of advantages:

  • Centralized web services registration. Interface Application Programming Interfaces (APIs) are now centralized in the Inbound Web Services object. This means you can manage all your programmatic interfaces from a single object, which helps when using the Web Service Catalog for the Oracle Integration Cloud Service as well as any API management capabilities.
  • Isolation from change. One of the major features of the REST capability within Inbound Web Services is that the URI is no longer fixed and can differ from the underlying service. This means you can isolate your interface clients from changes.
  • Standardization. The Inbound Web Services object has inherent standards that can be reused across both SOAP and REST based services. For example, the ConfigTools object model can be directly wired into the service, reducing time.
  • Reduced cost of maintenance. One of the features of the new capability is the ability to group all your interfaces into a minimal number of registrations. This reduces maintenance and allows you to control groups of interfaces easily.

The Inbound Web Services now supports two Web Service Classes:

  • SOAP - Traditional XAI and IWS based services based around the SOAP protocol. These services will be deployed to the Oracle WebLogic Server.
  • REST - RESTful based services that are now registered for use. These services are NOT deployed as they are used directly using the REST execution engine.

Inbound Web Service Business Object

For REST Services, a new optimized maintenance function is now available. This facility has the following capabilities:

  • Multiple Services in one definition. It is now possible to define multiple REST services in one registration. This reduces maintenance effort and the interfaces can be enabled and disabled at the Inbound Web Service level. Each REST Service is regarded as an operation on the Inbound Web Service.
  • Customizable URI for service. The URI used for the REST Service can be the same as, or different from, the operation name.
  • Business Object Support. In past releases, Business Objects were not supported. In this release, there is limited support for Business Objects. Refer to the Release Notes and online documentation for clarification of the level of support.
  • Open API Support. This release introduces Open API support for documenting the REST API.

For example, the new Inbound Web Services maintenance function for REST is as follows:

Example REST Inbound Web Service definition

Active REST Services are available to the REST execution engine.
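As an illustrative sketch, a client might assemble a call to an active REST operation like this. The host, port, owner, service and operation names, and the `/rest/apis/...` path shape are all assumptions for illustration; the actual URI is whatever is registered on the Inbound Web Service and shown in its Open API document:

```python
import base64

def build_iws_rest_request(host, port, owner, service, operation, credentials=None):
    """Assemble the URL and headers for a registered REST operation.

    The '/rest/apis/{owner}/{service}/{operation}' path shape is an
    assumption for illustration, not the product's fixed URI scheme.
    """
    url = f"https://{host}:{port}/rest/apis/{owner}/{service}/{operation}"
    headers = {"Content-Type": "application/json"}
    if credentials:  # "user:password" pair for HTTP Basic authentication
        token = base64.b64encode(credentials.encode()).decode()
        headers["Authorization"] = f"Basic {token}"
    return url, headers

# Hypothetical host, service and operation names:
url, headers = build_iws_rest_request(
    "ouaf.example.com", 6500, "cm", "CM-AccountServices", "read",
    credentials="sysuser:secret")
print(url)  # → https://ouaf.example.com:6500/rest/apis/cm/CM-AccountServices/read
```

Because the registered URI can differ from the underlying service, the client above only ever needs the registered path, which is what isolates it from changes to the implementation.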

Open API (OAS3) Support has been introduced which provides the following:

  • Documentation of the API in various formats. The REST based API is documented based upon the metadata stored in the product.
  • Ability to authorize Inbound Web Services directly in Open API. It is possible to authorize the API directly from the Open API documentation. Developers can check the API prior to making it active.
  • Multiple formats supported. Developers can view payloads in various formats, including Model format.
  • Ability to download the API. You can download the API directly from the documentation in Open API format. This allows the API to be imported into development IDEs.
  • Ability to test inline. Active APIs can be tested directly from the documentation.
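To make the downloadable Open API format concrete, here is a minimal sketch that flattens an OAS3 document into its operation list, the kind of summary the generated documentation renders. The document below is invented for illustration; a real one would be downloaded from the product:

```python
# A minimal, hypothetical OAS3 document; the field names follow the
# Open API 3 specification, but the paths and summaries are invented.
spec = {
    "openapi": "3.0.0",
    "info": {"title": "CM-AccountServices", "version": "1.0"},
    "paths": {
        "/accounts/{accountId}": {"get": {"summary": "Read an account"}},
        "/accounts": {"post": {"summary": "Add an account"}},
    },
}

def list_operations(spec):
    """Flatten an Open API document into (METHOD, path, summary) tuples."""
    ops = []
    for path, methods in sorted(spec["paths"].items()):
        for method, op in methods.items():
            ops.append((method.upper(), path, op.get("summary", "")))
    return ops

for method, path, summary in list_operations(spec):
    print(f"{method:4} {path} - {summary}")
```

An IDE importing the downloaded document works from exactly this `paths` structure to offer code completion and stub generation.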

The following are examples of the documentation:

API Header including Authorization (Note: Server URL is generic as this server is NOT active).

Open API Support - Authorization

Operation/API List:

Open API Support - URL List

Request API with inbuilt testing facility:

Open API Support - API Request

Response API with response codes:

Open API Support - API Response

Model Format:

Open API Support - Model List

For more information about REST support, refer to the online documentation or Web Services Best Practices (Doc Id: 2214375.1) from My Oracle Support.

Oracle Utilities Customer To Meter/Customer Care And Billing 2.7.0.0.0 is available


Oracle Utilities Customer To Meter 2.7.0.0.0 and Oracle Utilities Customer Care And Billing 2.7.0.0.0 are now available for download from Oracle eDelivery Cloud. These new releases are based upon Oracle Utilities Application Framework 4.3.0.6.0 with new and updated functionality.

For details of the release, refer to the release notes and documentation available from the Oracle eDelivery Cloud and the Oracle Utilities Help Center.

Object Erasure capability introduced in 4.3.0.6.0


With data privacy regulations around the world being strengthened, data management principles need to be extended to most objects in the product. In the past, Information Lifecycle Management (ILM) was introduced for transaction object management and continues to be used today in implementations for effective data management. When designing the ILM capability, it did not make sense to extend it to Master data such as Accounts, Persons, Premises, Meters, Assets, Crews, etc., as data management and privacy rules tend to be different for these types of objects.

In Oracle Utilities Application Framework V4.3.0.6.0, we have introduced Object Erasure to support Master Data, taking into account purging as well as obfuscation of data. This new capability is complementary to Information Lifecycle Management, offering full data management capability; it neither replaces Information Lifecycle Management nor depends on Information Lifecycle Management being licensed. Customers using Information Lifecycle Management in conjunction with Object Erasure can implement full end-to-end data management capabilities.

The idea behind Object Erasure is as follows:

  • Any algorithm can call the Manage Erasure algorithm on the associated Maintenance Object to check the conditions that make the object eligible for erasure. This gives implementations the flexibility to initiate the process from a wide range of events; it can be as simple as checking some key fields or key data on an object (you decide the criteria). The Manage Erasure algorithm is used to detect the conditions, collate relevant information and call the F1-ManageErasureSchedule Business Service to create an Erasure Schedule Business Object in a Pending state to initiate the process. A set of generic Erasure Schedule Business Objects is provided (for example, a generic Purge object for use in purging data) and you can create your own to record additional information.
  • The Erasure Schedule BO has three states, which can be configured with algorithms (usually Enter algorithms; a set are provided for reuse with the product).
    • Pending - The initial state of the erasure.
    • Erased - The most common final state, indicating the object has been erased or obfuscated.
    • Discarded - An alternative final state where the record can be parked (for example, if the object becomes ineligible, an error has occurred in the erasure, or reversal of obfuscation is required).
  • A new Erasure Monitor (F1-OESMN) Batch Control can be used to transition the Erasure Schedule through its states and perform the erasure or obfuscation activity.
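The lifecycle just described can be sketched in a few lines. This is a simplified model for illustration only; the class and method names are invented and do not reflect the product's internal API:

```python
from datetime import date, timedelta

class ErasureSchedule:
    """Sketch of the Erasure Schedule lifecycle: Pending, then a waiting
    period, then Erased (or Discarded if business rules forbid erasure)."""

    def __init__(self, object_id, created, erasure_days):
        self.object_id = object_id
        self.created = created
        self.erasure_days = erasure_days  # waiting period before the monitor acts
        self.state = "Pending"

    def monitor(self, today, still_eligible=True):
        # Mimics the Erasure Monitor batch run: skip records still inside
        # the waiting period, then erase or discard per business rules.
        if self.state != "Pending":
            return self.state
        if today < self.created + timedelta(days=self.erasure_days):
            return self.state  # still within the Erasure Days window
        self.state = "Erased" if still_eligible else "Discarded"
        return self.state

sched = ErasureSchedule("ACCT-1234", date(2018, 7, 1), erasure_days=30)
print(sched.monitor(date(2018, 7, 15)))   # → Pending (inside waiting period)
print(sched.monitor(date(2018, 8, 15)))   # → Erased (past waiting period)
```

The waiting period here plays the role of the configurable Erasure Days, giving implementations a window in which a Pending record can still be discarded.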

Here is a summary of this processing:

Erasure Flow

Note: The base supplied Purge Enter algorithm (F1-OBJERSPRG) can be used for most requirements. It should be noted that it does not remove the object from the _K Key tables to avoid conflicts when reallocating identifiers.

The solution has been designed with a portal to link all the elements together easily, and the product comes with a set of pre-defined objects ready to use. The portal also allows an implementer to configure Erasure Days, which is effectively the number of days the record remains in the Erasure Schedule before being considered by the Erasure Monitor (basically a waiting period).

Erasure Configuration

As an implementer, you can just build the Manage Erasure algorithm to detect the business event, or you can also write the algorithms to perform all of the processing (and every variation in between). The erasure will respect any business rules configured for the Maintenance Object, so the erasure or obfuscation will only occur if the business rules permit it.

Customers using Information Lifecycle Management can manage the storage of Erasure Schedule objects using Information Lifecycle Management.

Objects Provided

The Object Erasure capability supplies a number of objects you can use for your implementation:

  • Set of Business Objects. A number of Erasure Schedule Business Objects such as F1-ErasureScheduleRoot (Base Object), F1-ErasureScheduleCommon (Generic Object for Purges) and F1-ErasureScheduleUser (for user record obfuscation). Each product may ship additional Business Objects.
  • Common Business Services. A number of Business Services including F1-ManageErasureSchedule to use within your Manage Erasure algorithm to create the necessary Erasure Schedule Object.
  • Set of Manage Erasure Algorithms. For each predefined Object Erasure object provided with the product, a set of Manage Erasure algorithms are supplied to be connected to the relevant Maintenance Object.
  • Erasure Monitor Batch Control. The F1-OESMN Batch Control provided to manage the Erasure Schedule Object state transition.
  • Enter Algorithms. A set of predefined Enter algorithms to use with the Erasure Schedule Object to perform common outcomes including Purge processing.
  • Erasure Portal. A portal to display and maintain the Object Erasure configuration.
Refer to the online documentation for further advice on Object Erasure.

New File Adapter - Native File Storage


In Oracle Utilities Application Framework V4.3.0.6.0, a new File Adapter has been introduced to parameterize file locations across environments. In previous releases, environment variables or hard-coded paths were used to specify the locations of files.

With the introduction of the Oracle Utilities Cloud SaaS services, the locations of files are standardized and, to reduce maintenance costs, these paths are now parameterized using an Extendable Lookup (F1-FileStorage) defining the path alias and the physical location. The on-premise version of Oracle Utilities Application Framework V4.3.0.6.0 supports local storage (including network storage) using this facility. The Oracle Utilities Cloud SaaS version supports both local (predefined) storage and the Oracle Object Storage Cloud.

For example:

Example Lookup

To use the alias in any FILE-PATH (for example), specify the URL in the FILE-PATH:

file-storage://MYFILES/mydirectory  (if you want to specify a subdirectory under the alias)

or

file-storage://MYFILES
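As a sketch of how such an alias might be resolved, the lookup can be thought of as a simple alias-to-path table. The alias name, physical path, and function below are invented for illustration; the product performs this translation internally:

```python
# Stand-in for the F1-FileStorage Extendable Lookup: alias -> physical path.
# Both the alias and the path are hypothetical values.
FILE_STORAGE_LOOKUP = {"MYFILES": "/u01/app/product/files"}

def resolve_file_storage(url):
    """Translate a file-storage:// URL into a physical path via the lookup."""
    prefix = "file-storage://"
    if not url.startswith(prefix):
        return url  # plain paths pass through unchanged
    alias, _, subdir = url[len(prefix):].partition("/")
    base = FILE_STORAGE_LOOKUP[alias]
    return f"{base}/{subdir}" if subdir else base

print(resolve_file_storage("file-storage://MYFILES/mydirectory"))
# → /u01/app/product/files/mydirectory
print(resolve_file_storage("file-storage://MYFILES"))
# → /u01/app/product/files
```

Changing the lookup entry in a target environment redirects every FILE-PATH that uses the alias, which is exactly why only the lookup needs to change on migration.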

Now, if you migrate to another environment (the lookup is migrated using the Configuration Migration Assistant), this record can be altered. If you are moving to the cloud, the adapter can be changed to the Oracle Object Storage Cloud. This reduces the need to change the individual places that use the alias.

It is recommended to take advantage of this capability:

  • Create an alias for each location you read or write files from in your Batch Controls, defining it using the Native File Storage adapter. Try to create as few aliases as possible to reduce maintenance costs.
  • Change all the FILE-PATH parameters in your batch controls to use the relevant file-storage URL.

If you decide to migrate to the Oracle Utilities SaaS Cloud, these Extendable Lookup values will be the only thing that changes to realign the implementation to the relevant location on the cloud instance. For both on-premise and cloud implementations, these definitions can now be migrated using the Configuration Migration Assistant.
