Sunday, October 13, 2013

Tracing a J2EE application in Database

Tracing an existing J2EE application (or a legacy application; legacy here means one that does not yet use CDI) at the database layer is not easy, especially if that application does not carry any reference to the user whom you want to trace. A cumbersome way would be to pass the user name or id from the view layer to each method you call on the model layer, and then pass it further down to the class method from which you obtain the database connection. But there is a far easier solution, which I am going to discuss in this post; it can be used to enable database tracing for any legacy application with very little effort.

The major issue with existing applications is that they cannot access the HttpSession from the model layer and hence cannot obtain the user id or user name of the current user. To overcome this we can use the ThreadLocal class or any implementation of it (in this post I am going to use the slf4j MDC class). A ThreadLocal variable is local to the currently executing thread and cannot be altered by a concurrent thread, so we can use it to store the user information. In a web application, however, each request during a user's session is likely to be handled by a different thread. So, to ensure that the user's information is stored in the ThreadLocal variable for every request, we can use a filter that takes the user id from the HttpSession and stores it in the ThreadLocal variable; to avoid memory leaks we remove the variable once the request is completed. Once this variable is stored it can be accessed from any class that is called by this thread, so we easily achieve the goal of getting the information we need to enable the trace at the database layer. The following code snippets show how it can be achieved.

The Filter Class :-

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

import org.slf4j.MDC;

public class UserIdInjectingFilter implements Filter {

    public void init(FilterConfig filterConfig) throws ServletException {
    }

    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        HttpSession session = ((HttpServletRequest) request).getSession(false);
        if (session != null && session.getAttribute("userID") != null) {
            // populate the MDC (a ThreadLocal-backed map) with the user id
            MDC.put("userID", (String) session.getAttribute("userID"));
        }
        try {
            chain.doFilter(request, response);
        } finally {
            // be sure to remove it; stale entries cause memory leaks
            // and PermGen out-of-space errors
            MDC.remove("userID");
        }
    }

    public void destroy() {
    }
}
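To wire the filter in for a legacy application, it also has to be declared and mapped in web.xml; a minimal registration might look like the following (the package name here is illustrative):

<filter>
    <filter-name>UserIdInjectingFilter</filter-name>
    <filter-class>com.example.filters.UserIdInjectingFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>UserIdInjectingFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>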
The central database connection management class methods:-
.....

private Connection connection = null;

Connection getDBConnection() {
    CallableStatement cs = null;
    try {
        Context initContext = new InitialContext();
        DataSource ds = (DataSource) initContext.lookup("jdbc/TestDS");
        connection = ds.getConnection();
        // get the value from the thread-local MDC populated by the filter
        String userId = MDC.get("userID");

        cs = connection.prepareCall("begin set_cid(?,?,?); end;");
        cs.setString(1, userId);
        // walk the stack to record which class/method asked for the connection
        String invokingMethodName =
            Thread.currentThread().getStackTrace()[3].getMethodName();
        String invokingClassName =
            Thread.currentThread().getStackTrace()[3].getClassName();
        cs.setString(2, invokingClassName);
        cs.setString(3, invokingMethodName);
        cs.executeUpdate();
    } catch (NamingException nameEx) {
        // handle exception here
    } catch (SQLException sqlEx) { // be specific :-)
        // handle exception here
    } finally {
        if (cs != null) {
            try {
                cs.close();
            } catch (SQLException e) {
                // log and ignore
            }
        }
    }
    return connection;
}

/**
 * Before closing the connection, unset the identifiers.
 */
public void closeConnection() {
    try {
        if (connection != null && !connection.isClosed()) {
            CallableStatement cs = connection.prepareCall("begin clear_cid(); end;");
            cs.executeUpdate();
            cs.close();
            connection.close();
        }
    } catch (SQLException sqlEx) {
        // handle your exception here
    }
}
.....
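For completeness, here is a hypothetical sketch of how a data-access method might use the pair above, so that every statement it runs is attributed to the current user (the method name and query are illustrative, not part of the original class):

public int countOrders() {
    int count = 0;
    Connection conn = getDBConnection(); // tags the session via set_cid
    try {
        // illustrative query; any work done here shows up under the user's id
        PreparedStatement ps = conn.prepareStatement("select count(*) from orders");
        ResultSet rs = ps.executeQuery();
        if (rs.next()) {
            count = rs.getInt(1);
        }
        rs.close();
        ps.close();
    } catch (SQLException e) {
        // handle exception here
    } finally {
        closeConnection(); // clears the identifiers via clear_cid and closes the connection
    }
    return count;
}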

The PL/SQL procedures to set the identifiers:
create or replace procedure set_cid(p_cid varchar2, p_module_id varchar2, p_method_id varchar2)
is
begin
  DBMS_APPLICATION_INFO.SET_CLIENT_INFO(p_cid);
  DBMS_APPLICATION_INFO.SET_MODULE(p_module_id, p_method_id);
end set_cid;
/

create or replace procedure clear_cid
is
begin
  DBMS_APPLICATION_INFO.SET_CLIENT_INFO(' ');
  DBMS_APPLICATION_INFO.SET_MODULE('', '');
end clear_cid;
/

The query to see the details:-

select client_info,module,action from v$session
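To look only at the tagged sessions, a trivial variant of the same query filters on the client_info column:

select client_info,module,action from v$session where client_info is not null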



Hope this helps!

Tuesday, October 1, 2013

OIM 11g cloning strategy

For OIM 10g there was Metalink documentation available for cloning a production OIM environment to a DR environment. For 11g, however, the documentation is not available. I had to clone an 11g environment for a client; the steps performed to accomplish this are mentioned below:-

For Initial Cloning (Assuming DR is not setup yet)

  1. Do a fresh install of WebLogic, OIM, and SOA, and run the respective configuration wizards.
  2. Apply any bundle patches that are applied to the existing production environment.
  3. Drop the newly created schema and database users (this step is important).
  4. Create the users and database tablespaces corresponding to the earlier schemas.
  5. Update the data sources' DB URLs.
  6. Update the OIMAuthenticationProvider with the correct values for DB URL, username, and password.
  7. Update the Direct DB configuration with the correct values for DB URL, username, and password.
  8. Copy the files cwallet.sso, .xldatabasekey, xell.csr, xlserver.cert, and default-keystore.jks from the existing production environment to the DR environment.
  9. Update the IT resource information with the correct target information using the OIM console.

On a repetitive basis

  1. Apply any bundle patches that are applied to the existing production environment.
  2. Drop the newly created schema and database users.
  3. Create the users and database tablespaces corresponding to the earlier schemas.
  4. Update the data sources' DB URLs.
  5. Update the OIMAuthenticationProvider with the correct values for DB URL, username, and password.
  6. Update the Direct DB configuration with the correct values for DB URL, username, and password.
  7. Copy the files cwallet.sso, .xldatabasekey, xell.csr, xlserver.cert, and default-keystore.jks from the existing production environment to the DR environment.
  8. Update the IT resource information with the correct target information using the OIM console.

Patching strategy

  • Whatever patches are applied to OIM in the production environment have to be applied to the DR environment separately.

Hope this is useful.

Friday, September 27, 2013

Auto Group Membership Event Handler not firing in OIM 11g

In OIM, during user creation, the Auto Group Membership event handler evaluates the membership rules, and based upon these rules roles are automatically assigned to the newly created user. After the roles are assigned to the user, the Evaluate User Policies scheduled job runs to automatically provision resources based upon the user's roles.

I recently faced an issue in which the auto group membership event handler was not firing during user creation, so I raised a case with Oracle Support and found out that, apparently, the custom event handlers I had written for populating certain fields such as user login were interfering with the built-in event handler. To resolve this issue, I increased the order of firing of my event handlers to very high values, specifically 20,000 and 20,001; this ensured that my event handlers did not interfere with the built-in event handler and resolved the problem.

You can change the order of the event handlers by using MDS explorer to modify the customeventhandler.xml file.

   <action-handler class="com.blogspot.ramannanda.oim.handlers.GenerateEmailId" entity-type="User" operation="CREATE" name="GenerateEmailId" stage="postprocess" order="200000" sync="FALSE"/>

Hope this helps.


Metalink DOC ID: 1469286.1

Monday, April 22, 2013

ADF examining memory leaks

Developers, while creating applications, often ignore guidelines about closing result sets, which is the most common cause of memory leaks. In ADF applications it is the RowSetIterators that need to be closed. This scenario can easily be simulated by load testing with JMeter while memory sampling is turned on in JVisualVM. In this post I will share an example one-page application depicting a master-detail relationship between a department and its employees. There is a disabled af:inputText on the fragment that is bound to a backing bean, and its getter method iterates over the employee table to calculate the total employee salary. Initially I created a secondary row set iterator and left it open, just to examine the memory leaks this open reference causes under load. The code in the backing bean is shown below.

    public Number getSum() {
        DCBindingContainer bc =
            (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
        EmpVOImpl empVoImpl =
            (EmpVOImpl) bc.findIteratorBinding("EmpVO1Iterator").getViewObject();
        Number totalAmount = new Number(0);
        // secondary row set iterator over the employee rows
        RowSetIterator it = empVoImpl.createRowSetIterator(null);
        while (it.hasNext()) {
            EmpVORowImpl row = (EmpVORowImpl) it.next();
            if (row.getSal() != null) {
                totalAmount = row.getSal().add(totalAmount);
            }
        }
        // closing the iterator is the fix; leave this line out to reproduce the leak
        it.closeRowSetIterator();
        this.sum = totalAmount;
        return sum;
    }

So, to identify that you have an open iterator issue, open JVisualVM's Sampler tab and turn on memory sampling. You will see instances of many different classes, but for this issue you need to focus on the ViewRowSetIteratorImpl instances, and from there on your project-specific VOImpl classes through which you obtained the iterator in the first place. Whenever you create a new RowSetIterator you are returned a ViewRowSetIteratorImpl instance, so, as you would expect, the number of instances increases as you open RowSetIterator instances and do not close them. When I ran the application under simulated JMeter load with the open iterator reference, I could see the number of ViewRowSetIteratorImpl instances increase very quickly. I then took a heap dump with JVisualVM, clicked on the ViewRowSetIteratorImpl class in the Classes tab to open the instance view, and, upon selecting an instance and querying for the nearest GC root (in the References section), I could see that EmpVOImpl was holding a reference to the ViewRowSetIteratorImpl instance. You can also use the OQL support in JVisualVM to find out which references are keeping the object alive.

select heap.livepaths(u, false) from oracle.jbo.server.ViewRowSetImpl u

[Screenshot: adfgcroot]

[Screenshot: livepaths]


After opening the EmpVOImpl instance by clicking on the node, I expanded mviewrowset (a field which holds the ViewRowSetImpl instance) and then, further expanding its mViews property, I could see a number of ViewRowSetIteratorImpl instances, as shown below.


[Screenshot: empvoimplholdingviewrowsetiteratorimpl]


So, to resolve this issue, all I had to do was close the open iterator by calling the closeRowSetIterator method on the RowSetIterator instance, as sketched below.
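As a defensive variant of the same loop, the close call can go in a finally block so that an exception thrown while processing rows cannot leak the iterator (a minimal sketch, using the same names as above):

RowSetIterator it = empVoImpl.createRowSetIterator(null);
try {
    while (it.hasNext()) {
        EmpVORowImpl row = (EmpVORowImpl) it.next();
        // process the row
    }
} finally {
    // runs even if row processing throws, so the iterator never leaks
    it.closeRowSetIterator();
}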


There are other alternatives for identifying the problem in your source code, such as turning on profiling in JVisualVM, but profiling a production system is not a recommended way of approaching the issue.


The comparison between the application with open iterators and with closed iterators is shown below.

[Screenshots: adfmemoryleak, nomemoryleakiteratorclose]


In this application, JMeter was used to simulate the load, with a loop controller configured to select different master rows, so that upon each selection change and partial refresh the getter method for the af:inputText in the backing bean was called again and again, increasing the number of iterator instances.


[Screenshot: jmeterconf]


The application and the JMeter .jmx file can be downloaded from the links below:-



  1. JMeter test plan

  2. Application

Friday, April 19, 2013

ADF exception handling scenario

A user on the OTN forum was trying to extend AdfcExceptionHandler to redirect to an error page and was facing a "Response already committed" exception. On examining the scenario I could see that the exception handler was being called in the RENDER_RESPONSE phase, and by that time it is too late to do a redirect, as part of the response has already been sent (you can check this by using the response.isCommitted() method). So redirects/forwards should never be issued from AdfcExceptionHandler in the conventional way.

The question also arises why someone would need to extend AdfcExceptionHandler in the first place. We need to do this because the default exception handling will not handle exceptions that occur during the render response phase; for example, if connectivity to the DB goes down during application execution, the exceptions raised then will not be handled by the default exception handling mechanism.

So we cannot use conventional redirects, yet we still have to redirect somehow; the solution is to issue a JavaScript redirect from the exception handler. The code for the exception handler is shown below.

package com.blogspot.ramannanda.view.handlers;

import java.sql.SQLException;

import javax.faces.context.ExternalContext;
import javax.faces.context.FacesContext;
import javax.faces.event.PhaseId;
import javax.servlet.http.HttpServletRequest;

import oracle.adfinternal.controller.application.AdfcExceptionHandler;
import oracle.jbo.DMLException;

import org.apache.myfaces.trinidad.render.ExtendedRenderKitService;
import org.apache.myfaces.trinidad.util.Service;


public class MyAppUIErrorHandler extends AdfcExceptionHandler {
    public MyAppUIErrorHandler() {
        super();
    }

    @Override
    public void handleException(FacesContext facesContext, Throwable throwable,
                                PhaseId phaseId) throws Throwable {
        ExternalContext ctx =
            FacesContext.getCurrentInstance().getExternalContext();
        HttpServletRequest request = (HttpServletRequest) ctx.getRequest();
        if (phaseId.getOrdinal() == PhaseId.RENDER_RESPONSE.getOrdinal()) {
            // handle a NullPointerException, SQLException, or runtime
            // DMLException that occurs during render response
            if (throwable instanceof DMLException ||
                throwable instanceof SQLException ||
                throwable instanceof NullPointerException) {
                String contextPath = request.getContextPath();
                FacesContext context = FacesContext.getCurrentInstance();
                StringBuffer script = new StringBuffer();
                // setting window.location.href causes the client-side redirect
                script.append("window.location.href='").append(contextPath).append("/error.html';");
                ExtendedRenderKitService erks =
                    Service.getRenderKitService(context,
                                                ExtendedRenderKitService.class);
                erks.addScript(context, script.toString());
                // encoding the script is required, as just adding it does not work here
                erks.encodeScripts(context);
                return;
            }
        }
        // fall back to the default handling for every other phase/exception
        super.handleException(facesContext, throwable, phaseId);
    }
}

So by using this code we redirect to the error.html page. I am also checking the phaseId here because the handler is called for each phase in turn, and we only want to do this handling in the render response phase.


Note: I have used the encodeScripts method after adding the script; this is required because it outputs any scripts required by the RenderKit, which in our case is the script we added.

[Screenshot: adferrorhandling]
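A side note for anyone trying this handler out: as far as I recall, a custom exception handler of this kind is registered by creating a text file named oracle.adf.view.rich.context.ExceptionHandler under the web project's META-INF/services folder, whose only content is the fully qualified class name of the handler:

com.blogspot.ramannanda.view.handlers.MyAppUIErrorHandler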

Sunday, April 14, 2013

ADF Master Detail Table Scenarios

In this post I will share some scenarios that you might face with the ADF master-detail table component, typically those mentioned below:-

  1. Create a row in the child table on creation of a row in the master table (uses an accessor to programmatically create the row in the child)
  2. Using the cascade delete option on committed rows

The code sample is based upon the SCOTT schema and uses the dept and emp tables. The dept table serves as the master table and the emp table serves as the child table. Also, the association relationship between the entities involved is a composition relationship.

There are a few prerequisites for this sample to work: basically, you need to create triggers on both tables, plus the database sequences that will be used to populate the primary key values. The SQL script is present in the sample app; an illustrative sketch of such a trigger/sequence pair follows.
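For illustration only (the names here are hypothetical; the actual script ships with the sample), such a pair might look like this:

-- hypothetical sequence and trigger populating the dept primary key
create sequence dept_seq;

create or replace trigger dept_bi_trg
before insert on dept
for each row
begin
  if :new.deptno is null then
    select dept_seq.nextval into :new.deptno from dual;
  end if;
end;
/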


1. Creating a row in the child table whenever a row is created in the master table:-

To accomplish this I have created a method in the application module implementation class that programmatically creates a department row and then uses the exposed accessor to create the row in the child. The snippet is shown below:-

    /**
     * Creates and inserts Dept and Emp rows.
     */
    public void createDeptAndEmpRow() {
        DeptVOImpl deptVO = this.getDeptVO1();
        DeptVORowImpl row = (DeptVORowImpl) deptVO.createRow();
        deptVO.insertRow(row);
        // accessor exposed by the composition association
        RowIterator iterator = row.getEmpVO();
        Number deptNumber = row.getDeptno().getSequenceNumber();
        NameValuePairs nvps = new NameValuePairs();
        nvps.setAttribute("Deptno", deptNumber);
        EmpVORowImpl empRow = (EmpVORowImpl) iterator.createAndInitRow(nvps);
        iterator.insertRow(empRow);
    }

Here I have used the *VOImpl and *VORowImpl class implementations. Also note the partial triggers on the employee table for the “create department and employee” button and the “delete department” button, which cause the employee table to refresh. A sketch of how a button action can invoke this method is shown below.
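This is a minimal sketch of a managed bean action invoking the method through an operation binding; it assumes createDeptAndEmpRow has been exposed on the application module's client interface and dropped into the page definition as a method action (the bean and binding names are illustrative):

import oracle.adf.model.BindingContext;
import oracle.binding.BindingContainer;
import oracle.binding.OperationBinding;

public class DeptBean {
    // action bound to the "create department and employee" button
    public String createDeptAndEmpAction() {
        BindingContainer bindings =
            BindingContext.getCurrent().getCurrentBindingsEntry();
        OperationBinding op = bindings.getOperationBinding("createDeptAndEmpRow");
        op.execute();
        if (!op.getErrors().isEmpty()) {
            // surface or log the errors as appropriate
        }
        return null;
    }
}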


2. Using the cascade delete option on committed rows:-


By default, database foreign key constraints ensure that you cannot delete rows from a parent table while records exist in the child table, so if you issue a delete statement on the master table (i.e. the Department table) you will receive a foreign key constraint violation exception. The solution is to make the foreign key constraint deferrable, which ensures that validation happens when you issue a commit rather than when you issue the delete statement. So, to get this to work, drop the existing constraint and recreate it as follows.

ALTER TABLE emp
ADD CONSTRAINT fk_deptno
FOREIGN KEY (deptno)
REFERENCES dept (deptno)
ON DELETE CASCADE
DEFERRABLE
INITIALLY DEFERRED;


Also note the ON DELETE CASCADE clause, which deletes the records in the child table whenever a row in the master table is deleted. The additional benefit of this is that you do not have to write code for deleting rows from the child instances.


The database scripts for the triggers and sequences are present in the sample project, which can be downloaded from here.


[Screenshot: masterdetailexample]

[Screenshot: associationrelationship]

Sunday, March 3, 2013

OIM 11g GTC Status Provisioning and Reconciliation

If you have a requirement to reconcile user status from a target resource and also to provision the user status value into a target account, you can follow this post.

Let’s say, for example, that in the target resource the status is marked as A for enabled and D for disabled. To accomplish this using the GTC connector you will need a lookup-based translation.

Provisioning:

For provisioning, let’s assume that the target database application table has a column USER_STATUS corresponding to the Status column in OIM, and that the target has different status values than OIM. To accomplish the provisioning, follow these steps:-

  1. Create a lookup definition as shown in the snapshot below.

    [Screenshot: oimprovstatuslookup]
  2. Now create the GTC connector; do not choose trusted source reconciliation, as this is an example of target resource reconciliation and provisioning. Map the OIM user dataset's Status column to the provisioning staging dataset's USER_STATUS column and choose the “create mapping with translation” option. The mapping should be as shown in the screenshot below.

    [Screenshot: provstatusmapping]
  3. Now, when provisioning happens, USER_STATUS will be populated as A, and when you disable the user, USER_STATUS will be set to D on the target resource.

Reconciliation:

If you also have to reconcile the target resource's status with OIM's resource status, and the target resource has different values for the statuses than OIM, then follow the steps below:-

  1. Create another lookup (you could also reuse the same lookup) that maps the target's statuses to OIM's statuses. Since this is target resource reconciliation, the statuses will be Enabled and Disabled on the OIM side; had this been trusted source reconciliation, they would have been Active and Disabled. The screenshot below shows the lookup.

    [Screenshot: reconcilestatuslookup]
  2. Now go to the reconciliation mapping of the GTC connector and, in the reconciliation staging dataset, add a column and name it RECONCILE_STATUS; choose the “create mapping with translation” option and map the USER_STATUS field to the reconciliation lookup literal which has the mapping for translation. Refer to the screenshots below.

    [Screenshot: statusreconcile_trans1]
    [Screenshot: statusreconcile_trans2]
  3. After the above mapping is done, map the new RECONCILE_STATUS column in the reconciliation staging dataset to the OIM_OBJECT_STATUS field in the OIM account dataset.

    [Screenshot: reconcileoimobjectstatus]

This completes the mapping for the GTC connector; you can now test the provisioning and reconciliation. To test the reconciliation, change the status from A to D in the target database table and run the OIM reconciliation scheduler, which in turn will generate the event, and the resource will be disabled in OIM for that particular user.

The following screenshot shows how the entire mapping looks.

[Screenshot: gtcscreen]

Friday, January 4, 2013

Target Reconciliation Scheduler in OIM 11g

In this post I am sharing the process of building your own custom reconciliation connector. The process flow of the scheduler is shown in the flowchart below.

[Flowchart: customtargetreconciliation]

This flowchart is a simplified version and should serve as a simple aid. The code for the 11g scheduled task is shown below; it is a fairly crude implementation that just serves as a POC, performing a full target reconciliation.

public class UserDetailReconTask extends TaskSupport {
    private static final ADFLogger reconLogger =
        ADFLogger.createADFLogger(UserDetailReconTask.class);

    public UserDetailReconTask() {
        super();
    }

    public void execute(HashMap hashMap) {
        String methodName =
            Thread.currentThread().getStackTrace()[1].getMethodName();
        tcITResourceInstanceOperationsIntf itRes =
            Platform.getService(tcITResourceInstanceOperationsIntf.class);
        reconLogger.entering(methodName, hashMap.toString());
        // scheduled-task parameters: the IT resource, the resource object to
        // reconcile, and the target table to read from
        String itResourceName = hashMap.get("ITResource").toString();
        String resourceObjectName = hashMap.get("ResourceObject").toString();
        String tableName = hashMap.get("TableName").toString();
        HashMap searchCriteria = new HashMap();
        if (reconLogger.isLoggable(Level.INFO)) {
            reconLogger.info("[ " + methodName + " ] Got IT resource name " +
                             itResourceName);
        }
        searchCriteria.put("IT Resources.Name", itResourceName);
        tcResultSet rss;
        tcResultSet parameters;
        HashMap paramsMap = new HashMap();
        try {
            rss = itRes.findITResourceInstances(searchCriteria);
            Long ll = rss.getLongValue("IT Resource.Key");

            parameters = itRes.getITResourceInstanceParameters(ll);
            // collect the connection parameters defined on the IT resource
            for (int i = 0; i < parameters.getRowCount(); i++) {
                parameters.goToRow(i);
                String paramName =
                    parameters.getStringValue("IT Resources Type Parameter.Name").trim();
                String paramValue =
                    parameters.getStringValue("IT Resource.Parameter.Value");
                if (paramName.equalsIgnoreCase("DatabaseName")) {
                    paramsMap.put("DatabaseName", paramValue);
                } else if (paramName.equalsIgnoreCase("URL")) {
                    paramsMap.put("URL", paramValue);
                } else if (paramName.equalsIgnoreCase("UserID")) {
                    paramsMap.put("UserID", paramValue);
                } else if (paramName.equalsIgnoreCase("Password")) {
                    paramsMap.put("Password", paramValue);
                } else if (paramName.equalsIgnoreCase("Driver")) {
                    paramsMap.put("Driver", paramValue);
                }
            }
        } catch (tcAPIException e) {
            reconLogger.severe("[ " + methodName + " ] " +
                               "error occurred while retrieving the IT resource", e);
            throw new RuntimeException("[ " + methodName + " ] " +
                                       "error occurred while retrieving the IT resource");
        } catch (tcColumnNotFoundException e) {
            reconLogger.severe("[ " + methodName + " ] " +
                               "error occurred while retrieving an IT resource column name", e);
            throw new RuntimeException("[ " + methodName + " ] " +
                                       "error occurred while retrieving an IT resource column name");
        } catch (tcITResourceNotFoundException e) {
            reconLogger.severe("[ " + methodName + " ] " +
                               "error occurred while retrieving the IT resource by key", e);
            throw new RuntimeException("[ " + methodName + " ] " +
                                       "error occurred while retrieving the IT resource by key");
        }
        reconcileAndCreateEvents(paramsMap.get("UserID").toString(),
                                 paramsMap.get("Password").toString(),
                                 paramsMap.get("Driver").toString(),
                                 paramsMap.get("URL").toString(),
                                 resourceObjectName, tableName);

        reconLogger.exiting("UserDetailReconTask", methodName);
    }

    public HashMap getAttributes() {
        return null;
    }

    public void setAttributes() {
    }

    /**
     * Gets the data from the source table and creates reconciliation events;
     * OIM then applies the matching rules to check whether the user is there or not.
     * @param adminID source admin user id
     * @param password source password
     * @param driver JDBC driver class name
     * @param url JDBC URL of the target database
     * @param resourceObject target resource object
     * @param tableName the target table to reconcile from
     */
    private void reconcileAndCreateEvents(String adminID, String password,
                                          String driver, String url,
                                          String resourceObject,
                                          String tableName) {
        String methodName =
            Thread.currentThread().getStackTrace()[1].getMethodName();
        reconLogger.entering("UserDetailReconTask", methodName);
        ResultSet rs = null;
        Connection conn = null;
        PreparedStatement ps = null;
        try {
            Class.forName(driver);
        } catch (ClassNotFoundException e) {
            throw new RuntimeException("Unable to find the driver class", e);
        }
        try {
            tcReconciliationOperationsIntf reconService =
                Platform.getService(tcReconciliationOperationsIntf.class);
            HashMap dataMap = new HashMap();
            conn = DriverManager.getConnection(url, adminID, password);
            ps = conn.prepareStatement("select * from " + tableName);
            rs = ps.executeQuery();
            reconLogger.info("[ " + methodName + " ] Executed the query successfully");
            while (rs.next()) {
                // map the reconciliation field name to the value from the target
                dataMap.put("UserLogin", rs.getString(1));
                reconLogger.info("[ " + methodName + " ] Got login id " +
                                 rs.getString(1));
                try {
                    // create the reconciliation event for this account
                    reconService.createReconciliationEvent(resourceObject,
                                                           dataMap, true);
                    reconLogger.info("[ " + methodName + " ] Created recon event for " +
                                     rs.getString(1));
                } catch (tcObjectNotFoundException e) {
                    reconLogger.severe("Unable to find the resource object");
                    throw new RuntimeException("Unable to find the resource object", e);
                } catch (tcAPIException e) {
                    reconLogger.severe("Unable to create the reconciliation event");
                    throw new RuntimeException("Unable to create the reconciliation event", e);
                }
            }
        } catch (SQLException e) {
            throw new RuntimeException("Database error during reconciliation", e);
        } finally {
            // close the JDBC resources so connections to the target do not leak
            try {
                if (rs != null) rs.close();
                if (ps != null) ps.close();
                if (conn != null) conn.close();
            } catch (SQLException e) {
                reconLogger.warning("Unable to close JDBC resources");
            }
        }
    }
}



Here the method of relevance is reconcileAndCreateEvents, which fetches the details from the target table and uses them to populate a dataMap of reconciliation field names and values. This map, along with the resource object name, is used to create the reconciliation event, which is then processed by the OIM reconciliation engine; the engine finds the reconciliation rule corresponding to the resource object and applies it, so in this case UserLogin is matched against User Login in OIM and the account linking is performed.



The reconciliation API reference link is here.



The scheduler XML is shown below. It is named UserDetailReconTask.xml and needs to be imported into MDS using the weblogicImportMetadata.sh script; the path needs to be /db/UserDetailReconTask.xml.



<scheduledTasks xmlns="http://xmlns.oracle.com/oim/scheduler">
  <task>
    <name>UserDetailReconTask</name>
    <class>com.blogspot.ramannanda.schedulers.UserDetailReconTask</class>
    <description>Target Reconciliation</description>
    <retry>5</retry>
    <parameters>
      <string-param required="true" encrypted="false" helpText="IT Resource">ITResource</string-param>
      <string-param required="true" encrypted="false" helpText="Resource Object name">ResourceObject</string-param>
      <string-param required="true" encrypted="false" helpText="Table Name">TableName</string-param>
    </parameters>
  </task>
</scheduledTasks>


The plugin XML is mentioned below.



<?xml version="1.0" encoding="UTF-8"?>
<oimplugins>
  <plugins pluginpoint="oracle.iam.scheduler.vo.TaskSupport">
    <plugin pluginclass="com.blogspot.ramannanda.schedulers.UserDetailReconTask" version="1.0" name="TrustedSourceReconciliation"/>
  </plugins>
</oimplugins>


Note: a real implementation should filter the data from the target for performance, for example by using something like a last-modified timestamp; a sketch of that variant follows.
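This sketch assumes the target table has a LAST_MODIFIED column and that the task persists the timestamp of its previous run; both are assumptions and not part of the code above.

// hypothetical incremental variant: only pick up rows changed since the last run
PreparedStatement ps = conn.prepareStatement(
    "select * from " + tableName + " where LAST_MODIFIED > ?");
ps.setTimestamp(1, lastRunTimestamp); // persisted from the previous run
ResultSet rs = ps.executeQuery();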