Documentum Composer: The Core Project

Artifact Concerns

As most of you will be aware, every Documentum Composer workspace with at least one Documentum Project in it also has an additional project called the Documentum Core Project.  In this blog post I want to investigate this special project a bit.

The core project is created when you create your first regular project.  When using the Documentum Artifacts perspective it is hidden from the Navigator view, so you don't normally see it.

image

But if you switch to the Java perspective, or something similar, you will be able to see it.

image

It is a read-only project and, unlike regular projects, it does not have an artifact builder.  So whilst it contains a set of artifacts, they are never built and the core project can never be installed.  So what is its purpose?

As you will undoubtedly be aware, when you install a new Repository it is not completely empty.  It contains a set of objects: type definitions, a folder structure, some document templates, some default access controls, some formats, and so on.  The core project acts as a reference project for this set of artifacts, and every Documentum Project that you create references this core project.

image

This in turn allows you to reference any artifact in that core project.  So, for example, you could create a new type definition whose supertype is the core project's dm_sysobject.  Moreover, when the DAR installer installs your project, and therefore your type artifact, into the Repository as a docbase object, it will resolve the type's supertype DM_ID reference to the r_object_id of the dm_sysobject type in that Repository.

A type's supertype is just one example.  Every Documentum object model exhibits these object references, and therefore each instance of an object model forms a complex graph of objects knitted together by these references.  The core project helps developers to build on top of, and extend, the core object models which together make up the Documentum platform.

This referential project model is also an indication of several other important concepts.

  1. We expect developers to create many small, cross-referencing projects, as opposed to a single large, monolithic, all-encompassing project. 
  2. We expect an artifact to exist in one project only and to be referenced from other projects when required.  This is in contrast to DAB's "drag it in" model.
  3. Projects, and in fact references to projects, should be versioned.

Of course, in some ways Composer does not yet support the referential project model very well.  Install suffers from a general lack of error checking around prerequisite projects: it shouldn't, for example, be possible to install project B without a compatible version of project A already being installed.  We also don't support project versioning very well.  That support was planned for v1 but unfortunately fell off the list due to time and resource constraints.  My hope and expectation is that we will be addressing this soon.

Composer doesn't support a full development lifecycle either.  In the majority of cases I would argue that it is not particularly desirable for us, or for our partners, to have to distribute source projects (for one thing it contravenes rule 2 above) just to enable developers to extend our offerings and our partners'.  But this is exactly what has to happen today.  For this reason I have always wanted projects to be able to reference binary DARs as well as, or in fact instead of, projects.  This would allow us and our partners to distribute DARs, which makes much more sense.  To this end, a few months ago I completed an elaboration/proof of concept demonstrating this capability, so my hope again is that this will be added soon.

DFS

Another aspect of the core project is DFS development.  At the moment a Documentum Project is overloaded: it is both an artifact project, supporting artifact development, and a DFS project, supporting DFS consumer and service development.

This DFS development occurs against the DFS SDK, which is not, unfortunately, a freely available, standalone distributable like the JDK, for example.  So we ship it with Composer inside one of our plug-ins.  Therefore, to support the development of a DFS consumer or service, we have to deploy the SDK from the plug-in to a known location.  We chose to deploy the SDK to the core project as this is a location we know will always exist.

Over time we should see this design move towards the one most IDEs employ to manage the JDK, where each project may be associated with one of potentially many external SDK installations registered with the IDE itself.  This will decouple Composer from a particular version of the SDK and allow for separately managed upgrades of Composer and the DFS SDK, giving developers the freedom to upgrade Composer but keep developing against their current DFS SDK, or vice versa.  The majority of the code for this is already in place in the DFS plug-ins; we just need to enhance the UI to support registering DFS SDKs and associating them with (DFS) projects.  Oh, and we need to promote the DFS SDK itself as a freely available and downloadable distributable, on EDN 🙂

Conclusion

In this article we've taken a look at the Documentum Core Project, a part of Composer we have not looked at before but one of some significance.  We have discussed some of its implications and touched upon some (hopefully) not-too-distant enhancements.

As always, questions gratefully received.

Happy Composing!

Creating a DFS Services Client using Documentum Composer

One of my readers left a comment saying that my previous post left him high and dry when it came to creating the client project for testing his service.

So I put this very quick post together to describe it for him and hopefully for others too.

The first thing to note here is that there are lots of ways to create a DFS client, spanning both Java and .NET.  This is just one way, using Documentum Composer.

So where did the previous post end up?  Well, we had created a Documentum Project, created a DFS service within it, exported it as a service archive and deployed it onto a suitable application server.

So what next?

Create yourself another Documentum Project (File->New->Other->Documentum Project).

Next create yourself a lib folder and import (Import->File System) your service's remote JAR into it.  This will have been exported at the same time as your service archive.  If you called your service archive samples, for example, then you would look for samples-remote.jar.

Once imported, add it to your Java build path (<project>->Properties->Java Build Path->Libraries->Add JARs…).  Navigate to your remote JAR within your client project and select it.  You should end up with the equivalent of this:-

image

You also want to make sure you have the right DFS client class path configured.  So, whilst you are on this dialog, select the “DFS Services Library” build path entry and click Edit….  Select an appropriate library type for your scenario.  I’ve chosen “DFS Remote Client Library with UCF”:-

image

Click Finish.

Finally you need to write some code to call your service.  There are lots of contexts that your code could execute within: a main class, a Java application, an Eclipse application and lots of others too.  I’m a big fan of test-driven development, so I like to create JUnit tests as these can be automated later and become part of the application’s test suite.  Let’s create one of these (File->New->Other->Java->JUnit->JUnit Test Case):-

image

Give your JUnit test a package and a name, and make sure you’ve chosen to create a constructor stub.  Don’t worry about the superclass warning – just click Finish and you should see this dialog:-

image

Obviously we do want to add the JUnit 3 library to the project’s class path, so choose to perform that action and click OK.

Composer should do that for you and open your newly created JUnit test case in the Java editor.  We now need to add our service orchestration code.  Add a method called test with the following signature:-

image
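Since the screenshot isn't reproduced here, below is a minimal sketch of what that test class skeleton might look like.  The class name is purely illustrative; the shape follows the JUnit 3 conventions described above (a constructor stub and a public void method whose name begins with "test"):

import junit.framework.TestCase;

// Class name is illustrative; use whatever package and name you gave the wizard.
public class HelloWorldServiceTest extends TestCase {

    // The constructor stub the wizard generates when you tick that option.
    public HelloWorldServiceTest(String name) {
        super(name);
    }

    // JUnit 3 runs public void methods whose names begin with "test".
    public void test() throws Exception {
        // Service orchestration code goes here (see below).
    }
}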

Then add your orchestration code to that method.  If you’re not sure how to get started with this, a great place to start is the client code that ships with the DFS SDK.  Cut and paste in a sample remote client and re-purpose it for your needs by changing the service-related calls to use your service.  I’m actually using the DFS Hello World sample for this article.  As you are adding code, especially if you cut and paste a sample, you’ll end up with a bunch of problem markers:-

image

Because we already configured the project with a DFS class path, you can hover over the problem markers in the sidebar and use the quick fixes to add the necessary DFS imports.
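For reference, here is a rough sketch of what the body of the test method might look like once the imports are in place, loosely adapted from the Hello World sample.  The repository name, credentials, module name, context root and the operation called at the end are all placeholder assumptions; substitute your own values and your service’s actual interface:

ContextFactory contextFactory = ContextFactory.getInstance();
IServiceContext context = contextFactory.newContext();

// Identify the repository and credentials the service should use (placeholder values).
RepositoryIdentity identity = new RepositoryIdentity();
identity.setRepositoryName("myrepository");
identity.setUserName("dmadmin");
identity.setPassword("password");
context.addIdentity(identity);

ServiceFactory serviceFactory = new ServiceFactory();

// These must match the module name and context root your service was deployed with.
String moduleName = "samples";
String contextRoot = "http://localhost:8080/services";

IHelloWorldService service = serviceFactory.getRemoteService(
        IHelloWorldService.class, context, moduleName, contextRoot);

// Call an operation and assert on the result; the operation name here is illustrative only.
String greeting = service.sayHello("Composer");
assertNotNull(greeting);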

A good tip here is to note that the moduleName and contextRoot (in the code sample above) must match the equivalent settings that you configured for your services project and that are also specified in the Export Service Archive dialog:-

image

That is pretty much it.  Make sure your services are deployed, that the server is running and that the WSDL for each service is available.

To run the JUnit test, right-click on it in the Project Explorer and select Run As->JUnit Test.  This will execute your test case using the JUnit test framework and display the results.

If you want to debug the code, set a breakpoint and select Debug As->JUnit Test instead.

Alternative

Now it is important to note that DFS supports two modes of invocation: remote, as we outlined above, and local.

Local invocation provides us with an alternative way to test our service code with less fixture than the remote approach, i.e. without having to deploy it to a server.

First we need to add some of the service’s resources to the services project’s class path so that the DFS runtime can find them.  The EAR is configured in such a way that these same resources are available on the class path when it is deployed on an app server; we are just mimicking this configuration.

So right-click on your services project and select Properties->Java Build Path->Libraries.  Click Add Class Folder, navigate down your services project, expand the Web Services folder and then the bin folder, and check the gen-rsc folder.  Click OK.  You should see this folder added to the build path:-

image

That’s the services project configured.

Now create another Documentum client project.

This time round make sure you add your service project to the Java Build Path (<client project>/Properties->Java Build Path->Projects):-

image

As before, edit the DFS Services Library build path entry, but this time select “DFS Local Library”.

There is no need to add the remote JAR as we will pick up the service classes directly from the project.

As before, add the JUnit test case.  If you get any compilation errors, quick-fix them.  Note that when you add your service code you must call getLocalService instead of getRemoteService, and you don’t need to register the service context with the ContextFactory ahead of time.  So your service invocation code simply becomes:-

ContextFactory contextFactory = ContextFactory.getInstance();
IServiceContext context = contextFactory.newContext();

ServiceFactory serviceFactory = new ServiceFactory();  
IHelloWorldService service = serviceFactory.getLocalService(
               IHelloWorldService.class, context);

And that’s it.  You are ready to go.  Right-click on the Junit class in the project explorer and select Run As (or Debug As)->Junit Test.

Conclusions

So we’ve walked through creating a remote DFS consumer and looked at an alternative for creating a local DFS consumer that leverages DFS’s local invocation facility.

As always I await your feedback.  In the meantime…

Happy Composing!

Documentum Development: Past, Present and Future

 

The Past

Using DAB, the mode du jour was to develop against a Docbase: live, connected, whatever you want to call it.  You nominated a "System of Record" Docbase.  You then went about the business of defining all the artifacts (types, lifecycles, workflows, relations and so on) that collectively made up your Docapp (a DOCbase APPlication).

Your application code would never execute against these particular artifacts in this particular Docbase, because they really were just the definitions of resources that your application required to be present in a production Docbase in order to run correctly.  Meanwhile, the rest of your application artifacts (Java classes, servlets and JSPs, XML, etc.) were all managed out of a source control system.

To prepare a content-rich application for distribution, release engineering (releng) would check out all the application artifacts and build and package them as an installer, a WAR or an EAR.  They would then export the associated Docapp as a Docapp Archive and place it with the application package.  These resources were then placed on a download site where customers could find them.

To install the application, the customer would run the installer or deploy the WAR or EAR, and would then use DAI to install the Docapp Archive into one or more production Docbases.

So, that is how it was.  The System of Record for your source artifacts was the Docbase itself.  But what was so wrong with that, I hear some of you ask?  Well, a few things, as it happens:-

1. Having different Systems of Record is just plain wrong, and what made us so special anyway?
2. Versioning semantics for Docbase artifacts are not always the same as they are for source control, complicating development, histories, labelling and snapshotting
3. Releng needed to own and understand Docbases, and to construct Docbase processes (not all of which were that automatable), which was hugely burdensome for them

With reference to the versioning semantics, it is important for us all to recognize that the Docbase is just plain bad at development versioning.  For example, types in a Docbase cannot be versioned, but code certainly depends on particular versions of types.  If we could reliably version artifacts then we could support explicit versioned dependencies, even when the target environment does not support them.  Explicit versioned dependencies allow us to bind code to specific versions of Docbase resources and have those dependencies validated.  The upsides of reliable versioning are hopefully evident.

The Present

Recognizing that content management would soon standardize through efforts like CMIS, and that this would foster an entire industry of organizations building content-rich applications, we knew we would have to address these shortcomings and move more mainstream.

So we built Composer: a system of linked, XML-based artifacts and an install process that can transform those artifacts into Docbase objects, preserving the links by converting them to Docbase links, i.e. OIDs.

No longer is there a need for a "System of Record" Docbase.  Docbase artifacts can now be developed in the same Eclipse IDE right alongside all your other application artifacts.  They can also be managed out of the same source control system, giving us reliable version semantics and greatly simplifying the releng process.

The Future

OK so we have ironed out that little wrinkle.  Docbase artifacts are now analogous to other source artifacts and therefore we are all now followers of the same development model.   In this respect we’ve become a bit more standard and hopefully removed a big barrier to entry for release engineering departments.  We believe this will help promote EMC|Documentum as the platform for developing content-rich applications.

So what’s next?

In a word "OSGi".

A phenomenon is sweeping the Java community in the form of OSGi, a dynamic module system for Java.  Most application servers are OSGi-based already, and this phenomenon will undoubtedly impact the ECM community too.

OSGi is mature.  It is over 10 years old and has grown out of the embedded systems world, so this is not new technology by any stretch; it may just be new to some of us.  It is also standards-based.

OSGi promotes a world of modules (bundles, in its parlance) that can be dynamically inserted into, or removed from, a live, running system with no interruption to the services it provides.  Each bundle has versioned dependencies that the system will honour and enforce, and a system can support multiple versions of a module at any one time.

For all these reasons (and many others) I believe that OSGi is the perfect fundamental building block for next generation ECM solutions and solution frameworks.

It is also very important to recognize that in this future world of solutions and solution frameworks we may well see direct, application-hosted manipulation of our XML systems of record, as well as continued support for the more traditional offline development model; the two will co-exist quite happily side by side.  This direct manipulation will be supported by a family of web-based tooling built on the central core of the Composer tooling platform.

But this is still a little way in the future.  Because OSGi will affect how applications are delivered, the most noticeable change you will see in the short term will be to our friend the DAR (Documentum Archive), which I believe is already set to become an OSGi bundle.  This will prepare the way for it to be just like every other piece of the solution: developed the same way, delivered the same way, depended upon and dependent upon others, really binding into the application as the vital constituent part that it is.

Conclusions

So what conclusions should you draw from this? 

Well, if you are considering developing a content-rich application (or larger) on Documentum then you need to be following Composer’s lead and adopting model-driven development.  Making XML models a central tenet of your development practices is the right thing to do.  Leverage Composer, the tooling platform, and in particular its EMF-based modelling, and you’re not going to go too far wrong.

Also, getting more involved with Composer, the tooling platform, by extending it for your own purposes, for example by adding your own Docbase artifacts, would be an excellent way to introduce yourself to the wonderful world of OSGi.

Happy Composing

DarDocs

Besides the biggie of source control, another advantage of having an XML-based, offline system of record for our Documentum artefacts is that we can use standard XML tools on those artefacts.

For example, you could generate HTML docs from them; DarDocs, if you will.

Take a transform like this:-

<!-- XML Transform to generate HTML from Documentum Type Artefacts -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:lxslt="http://xml.apache.org/xslt"
    xmlns:xmi="http://www.omg.org/XMI"
    xmlns:Artifact="http://documentum.emc.com/infrastructure/12012007/artifact">
<xsl:template match="/">
  <html>
  <body>
     <table border="1">
     <xsl:for-each select="Artifact:Artifact/dataModel/type">
        <tr>
           <td>Document Type</td>
           <td><xsl:value-of select="@name"/></td>
        </tr>
        <tr>
           <td>Super Type</td>
           <td><xsl:value-of select="substring-before(substring-after(super_name/@href, 'name='), '#')"/></td>
        </tr>
        <tr bgcolor="#9acd32">
           <th>Attribute Name</th>
           <th>Attribute Type</th>
           <th>Attribute length</th>
           <th>Attribute Label</th>
           <th>Help Text</th>
           <th>Comment</th>
        </tr>
        <xsl:for-each select="primaryElement/attributes">
           <tr>
              <td><xsl:value-of select="@attr_name"/></td>
              <td><xsl:value-of select="attr_type/@values"/></td>
              <td><xsl:value-of select="@attr_length"/></td>
              <td><xsl:value-of select="attrAnnotations/locales/@label_text"/></td>
              <td><xsl:value-of select="attrAnnotations/locales/@help_text"/></td>
              <td><xsl:value-of select="attrAnnotations/locales/@comment_text"/></td>
           </tr>
        </xsl:for-each>
     </xsl:for-each>
     </table>
  </body>
  </html>
</xsl:template>

</xsl:stylesheet>

And a simple ant task like this:-

<?xml version="1.0" encoding="UTF-8"?>
<project default="generateDocs">

   <target description="Generate DarDocs" name="generateDocs"> 
      <xslt basedir="Artifacts" destdir="Documents"  style="style/DocTypes.xsl"> 
         <mapper type="glob" from="*.type" to="*.html"/> 
      </xslt> 
   </target>
</project>

Run it over a project with a few types in it and you should get an HTML file for each type that looks something like this:-

Document Type soptype
Super Type dm_document
Attribute Name Attribute Type Attribute length Attribute Label Help Text Comment
attr1 STRING 10 attr1_label Some user help on attr1 Developer comment for attr1
attr2 INTEGER 10 attr2 label Some user help for attr2 Developer comment for attr2

It would be simple enough for a release engineer to add this to your releng build so that each build also produces a set of DarDocs for your DARs.
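Incidentally, if Ant isn’t part of your build, the same transform can be driven from plain Java using the standard JAXP APIs.  Here is a minimal sketch that mirrors the Ant task above; the Artifacts, Documents and style/DocTypes.xsl locations are the same assumptions as before:

import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class GenerateDarDocs {
    public static void main(String[] args) throws Exception {
        // Same layout as the Ant task: *.type artifacts in, *.html documents out.
        File artifactsDir = new File("Artifacts");
        File documentsDir = new File("Documents");
        documentsDir.mkdirs();

        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("style/DocTypes.xsl")));

        File[] artifacts = artifactsDir.listFiles();
        if (artifacts == null) {
            throw new IllegalStateException("No Artifacts folder found");
        }
        for (File artifact : artifacts) {
            if (!artifact.getName().endsWith(".type")) {
                continue;
            }
            String htmlName = artifact.getName().replaceAll("\\.type$", ".html");
            transformer.transform(new StreamSource(artifact),
                    new StreamResult(new File(documentsDir, htmlName)));
        }
    }
}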

Hat tip to Eddie Doey from UK Consulting for providing this great example.

Happy Composing!

Migrating your projects between versions of Documentum Composer

As we release new versions of Documentum Composer, you need to migrate your projects to the new version.  But how do you do this?

It’s pretty simple, actually.  The first time you start the new version of Composer, create yourself a new workspace.  Then re-import your projects into that new workspace using the Documentum-specific project import wizard: File->Import->Documentum->Existing Projects Into Workspace:-

image

Any model changes that need to be applied to your artifacts will occur during this import.

The astute will realize that this is an irreversible operation, so your whole team will need to move to the same version of Composer.

Happy Composing!