Ginkgo4J

Having been an engineer using Java since pretty much version 1, and having practiced TDD for the best part of 10 years, one thing that always bothered me was the lack of structure and context that Java testing frameworks like JUnit provide.  On larger code bases with many developers this problem can become quite acute.  When pair programming and working on existing tests I have sometimes even had to say to my pair

“Give me 10 or 15 minutes to figure out what this test is actually doing”

Because the fact of the matter is that the method name alone is simply not enough to convey the given/when/then context that is present in all tests.

I recently made a switch in job roles.  Whilst I stayed with EMC, I left the Documentum division (ECD), where I had been for 17 years, and moved to a new division called the Cloud Platform Team (CPT), whose remit is to help EMC make the transition to the 3rd platform.  As a result I am now based in the Pivotal office in San Francisco, I pair program, and I am now working in Golang.

Golang has a testing framework called Ginkgo that was created by one of Pivotal’s VPs, Onsi Fakhouri.  It mirrors frameworks from other languages like RSpec in Ruby.  All of these frameworks provide a very simple DSL that the developer can use in their tests to build up a described context with closures.  Having practiced this for the last six months I personally find this way of writing tests very useful, perhaps most useful when I pick up an existing test and try to modify it.

Now, since version 8, Java has included its version of closures, called lambdas.  Whilst they aren’t quite as flexible as their equivalents in other languages (all captured variables must be effectively final, for example), they are sufficient to build an equivalent testing DSL.  So that’s what I decided to do with Ginkgo4J, pronounced “Ginkgo for Java”.

So let’s take a quick look at how it works.

In your Java 8 project, add a new test class called BookTests.java as follows:-

package com.github.paulcwarren.ginkgo4j.examples;

import static com.github.paulcwarren.ginkgo4j.Ginkgo4jDSL.*;
import org.junit.runner.RunWith;
import com.github.paulcwarren.ginkgo4j.Ginkgo4jRunner;

@RunWith(Ginkgo4jRunner.class)
public class BookTests {
  {
    Describe("A describe", () -> {
    });
  }
}

Let’s break this down:-

  • The imports Ginkgo4jDSL.* and Ginkgo4jRunner add Ginkgo4J’s DSL and JUnit runner.  The JUnit runner allows this style of test to be run in all IDEs supporting JUnit (basically all of them) and also in build tools such as Ant, Maven and Gradle.
  • We add a top-level Describe container using Ginkgo4J’s Describe(String title, ExecutableBlock block) method.  The top-level braces {} trick (an instance initializer block) allows us to evaluate the Describe at the top level without having to wrap it.
  • The 2nd argument to the Describe, () -> {}, is a lambda expression that provides the implementation of the ExecutableBlock interface.

The lambda expression passed as the 2nd argument to the Describe will contain our specs.  So let’s add some now to test our Book class.

private Book longBook;
private Book shortBook;
{
  Describe("Book", () -> {
    BeforeEach(() -> {
      longBook = new Book("Les Miserables", "Victor Hugo", 1488);
      shortBook = new Book("Fox In Socks", "Dr. Seuss", 24);
    });

    Context("Categorizing book length", () -> {
      Context("With more than 300 pages", () -> {
        It("should be a novel", () -> {
          assertThat(longBook.categoryByLength(), is("NOVEL"));
        });
      });

      Context("With fewer than 300 pages", () -> {
        It("should be a short story", () -> {
          assertThat(shortBook.categoryByLength(), is("NOVELLA"));
        });
      });
    });
  });
}

Let’s break this down:

  • Ginkgo4J makes extensive use of lambdas to allow you to build descriptive test suites.
  • You should make use of Describe and Context containers to expressively organize the behavior of your code.
  • You can use BeforeEach to set up state for your specs. You use It to specify a single spec.
  • In order to share state between a BeforeEach and an It you must use member variables.
  • In this case we use Hamcrest’s assertThat syntax to make expectations on the categoryByLength() method (a sketch of a Book class with this behavior follows below).
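
For completeness, here is the sort of Book class these specs assume.  This class isn’t part of Ginkgo4J; the constructor shape, the 300-page threshold and the "NOVEL"/"NOVELLA" category names are simply inferred from the specs above.

public class Book {

  private final String title;
  private final String author;
  private final int pages;

  public Book(String title, String author, int pages) {
    this.title = title;
    this.author = author;
    this.pages = pages;
  }

  // Categorize by length: anything over 300 pages is a novel,
  // anything shorter is a novella (threshold assumed from the specs)
  public String categoryByLength() {
    return pages > 300 ? "NOVEL" : "NOVELLA";
  }
}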

Assuming a Book model with this behavior, running this JUnit test in Eclipse (or IntelliJ) will yield:

[Screenshot: the JUnit view in Eclipse showing the specs passing]

Success!

Focussed Specs

It is often convenient, when developing, to be able to run a subset of specs. Ginkgo4J allows you to focus individual specs or whole containers of specs programmatically by adding an F in front of your Describe, Context, and It:

FDescribe("some behavior", () -> { ... })
FContext("some scenario", () -> { ... })
FIt("some assertion", () -> { ... })

Doing so instructs Ginkgo4J to run only those specs. To run all specs again, you’ll need to go back and remove the Fs.

Parallel Specs

Ginkgo4J has support for running specs in parallel.  It does this by spawning separate threads and dividing the specs evenly among them.  Parallelism is on by default and uses 4 threads.  If you wish to change this you can add an additional annotation to your test class:-

@Ginkgo4jConfiguration(threads=1)

which will instruct Ginkgo4J to run a single thread.
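
For example, combining the annotation with the runner on the Book test class from earlier would look like this sketch:

@RunWith(Ginkgo4jRunner.class)
@Ginkgo4jConfiguration(threads = 1)
public class BookTests {
  // ... Describe/Context/It specs as before ...
}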

Spring Support

Ginkgo4J also offers native support for Spring.  To test a Spring application context simply replace @RunWith(Ginkgo4jRunner.class) with @RunWith(Ginkgo4jSpringRunner.class) and initialize your test class’s Spring application context in exactly the same way as if you were using Spring’s SpringJUnit4ClassRunner:-

@RunWith(Ginkgo4jSpringRunner.class)
@SpringApplicationConfiguration(classes = Ginkgo4jSpringApplication.class)
public class Ginkgo4jSpringApplicationTests {

  @Autowired
  HelloService helloService;

  {
    Describe("Spring integration", () -> {
      It("should be able to use spring beans", () -> {
        assertThat(helloService, is(not(nullValue())));
      });

      Context("hello service", () -> {
        It("should say hello", () -> {
          assertThat(helloService.sayHello("World"), is("Hello World!"));
        });
      });
    });
  }

  @Test
  public void noop() {
  }
}

The noop test is currently required as Spring’s JUnit runner asserts that at least one test exists.

Trying it out for yourself

Please feel free to try it out on your Java projects.  For a Maven project add:-

<dependency>
    <groupId>com.github.paulcwarren</groupId>
    <artifactId>ginkgo4j</artifactId>
    <version>1.0.0</version>
</dependency>

or for a Gradle project add:-

compile 'com.github.paulcwarren:ginkgo4j:1.0.0'

For others, see here.


Role-based SPAs with AngularJS and Spring HATEOAS

Most, if not all, modern enterprise software needs a role-based user interface to support commonplace use cases such as devolved administration; i.e. User X is a coordinator for Group Y, allowing him or her to perform admin functions for that Group.  This is just one example, of course; there are plenty of others.

Spring provides support for many common authentication and authorization patterns and standards like OAuth2 and user management through LDAP or CAS.  It also supports maturing REST models such as hypermedia through Spring HATEOAS and Spring Data REST.

On the client side, enterprises usually like to consolidate on a UI technology so that their apps have a similar look and feel.  Obviously, AngularJS is right up there (at the time of writing anyway… Polymer anyone?), driven largely, I think, by its extensibility.  It seems that nowadays pretty much any service or directive you can think of is out there on GitHub as a bower component.  One such extension, pertinent to this article, is angular-hal.

I wanted to have a look at how one might implement a role-based UI in a modern day SPA by pairing these technologies together.

The complete code for this article can be found in my github repo here.

So, before we begin let’s set some context.  We are operating within a Spring Boot-based web application, secured with a handy (for demo purposes) in-memory user database.  To navigate this application it offers a hypermedia-based REST API for managing these users.  Client-side, we then front this hypermedia API with a role-based UI allowing ADMINs to add or remove users, allowing ADMINs or COORDinators to reset passwords, while USERs can just view users and do nothing else.

The type of resource doesn’t really matter, the pattern presented here applies regardless.  I chose user management as it meant the app could be self-demonstrating, and that appealed to me.

The app in github, especially security-wise, is based on Dave Syer’s blog post.  This was part II of a (currently) VI part series, which is excellent, by the way, and I thoroughly recommend checking it out.  Part II was adequate for my purposes.  From a HATEOAS perspective I also used Greg Turnquist’s blog post as a jumping-off point.  And from a hypermedia standpoint I am basing my approach on Oliver Gierke’s RESTbucks presentations.

The App

Features and functionality of the app are very simple indeed.  You can log in and out and manage users.  Initially, there is only one user, username wozza, password dotio, and this user is an ADMIN so he can create other users and set passwords.

The idea therefore is to log in as wozza and create other users with different roles, log out and log in as those users to see how it affects the usability of the app.

So, if I log in as wozza:-

[Screenshot: logged in as wozza]

All functions are available to us and we can create a new user fred_flintstone, who is a COORDinator:-

[Screenshot: creating the user fred_flintstone with the COORD role]

Once created we can then give Fred a password:-

[Screenshot: setting Fred’s password]

And then we can logout and log back in as Fred:-

[Screenshot: logged in as fred_flintstone]

If we do that we can see that Fred, who is a COORDinator, can reset passwords but can’t add or remove users.

As wozza we could add a third user, Barnie Rubble for example, with role USER, and then we would see that he can only view users, but I’ll leave that up to you.  You get the picture.

Server-side

On the server side I have configured form-login-based security that also uses a custom user details service.  This is a really, really simple in-memory user database, so note that if you restart the app you are back to the wozza user only.

Hypermedia-wise I have a root SiteController that is the entry point, offered at /api.  This links (via a linkrel) to the UsersController that manages the collection of users, and this in turn links to the UserController for managing individual user resources, including setting their passwords.  Therefore, from the single known entry point /api, the client can follow the “users” linkrel to get the set of users, and can potentially follow the “password” linkrel for an individual user to update their password, or follow the “delete” linkrel to remove the user from the system altogether.
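
The SiteController source isn’t reproduced in this post, but an entry-point controller along these lines would produce the root resource shown later in the article.  Treat it as a sketch only; the class and method names are illustrative and it assumes Spring HATEOAS’s ControllerLinkBuilder.

import static org.springframework.hateoas.mvc.ControllerLinkBuilder.linkTo;
import static org.springframework.hateoas.mvc.ControllerLinkBuilder.methodOn;

import org.springframework.hateoas.ResourceSupport;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class SiteController {

    // The single well-known entry point for the whole API
    @RequestMapping(value = "/api", method = RequestMethod.GET)
    public ResourceSupport api() {
        ResourceSupport root = new ResourceSupport();
        // the entry point's own self linkrel
        root.add(linkTo(methodOn(SiteController.class).api()).withSelfRel());
        // the "users" linkrel pointing at the users collection resource
        root.add(linkTo(methodOn(UsersController.class).listUsers(null)).withRel("users"));
        return root;
    }
}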

Important to this article is that these controllers are role-aware and offer these linkrels if, and only if, the current principal has the appropriate authority.  Let’s have a look at an example of this:-

[Screenshot: the UsersController.listUsers request handler]

Here we can see that in the UsersController.listUsers request handler we have injected (using standard Spring MVC) the security context’s Authentication object.  This then allows us to either add links to the users resource, or not.  In this example we are adding the self linkrel regardless (because the users collection is available to everyone) but only adding the create linkrel if the current principal has ADMIN authority; i.e. only admins can create new users.
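
Since the screenshot isn’t reproduced here, the handler would look roughly like the following.  This is a sketch only: the UserResource type is a placeholder (e.g. a ResourceSupport subclass), population of the collection is omitted, and linkTo/methodOn are the static imports from ControllerLinkBuilder used in the previous sketch.

@RequestMapping(value = "/users", method = RequestMethod.GET)
public Resources<UserResource> listUsers(Authentication authentication) {
    // build a resource per user from the in-memory store (population omitted)
    List<UserResource> users = new ArrayList<>();

    Resources<UserResource> resources = new Resources<>(users);

    // everyone gets the self linkrel on the users collection
    resources.add(linkTo(methodOn(UsersController.class)
            .listUsers(authentication)).withSelfRel());

    // only principals with ADMIN authority get the create linkrel
    if (authentication.getAuthorities()
            .contains(new SimpleGrantedAuthority("ADMIN"))) {
        resources.add(linkTo(methodOn(UsersController.class)
                .listUsers(authentication)).withRel("create"));
    }
    return resources;
}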

Likewise, for the user resource we do something similar:-

[Screenshot: the UserController request handler for an individual user]

Again we can see the self linkrel is added regardless; everyone can see a “user”.  But we only add the password linkrel if the current principal is an ADMIN or a COORDinator, and we only add the delete linkrel if the principal is an ADMIN.
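
Again as a sketch only, the link-building part of that handler might look like this.  The UserResource type, the user accessor and the "ADMIN"/"COORD" authority names are placeholders, and it assumes the controller is mapped at /users.

// authorities of the currently authenticated principal
Collection<? extends GrantedAuthority> authorities = authentication.getAuthorities();
GrantedAuthority admin = new SimpleGrantedAuthority("ADMIN");
GrantedAuthority coord = new SimpleGrantedAuthority("COORD");

UserResource resource = new UserResource(user);   // placeholder resource type

// everyone can see a user, so the self linkrel is always added
resource.add(linkTo(UserController.class).slash(user.getUsername()).withSelfRel());

// ADMINs and COORDinators may reset passwords
if (authorities.contains(admin) || authorities.contains(coord)) {
    resource.add(linkTo(UserController.class)
            .slash(user.getUsername()).slash("password").withRel("password"));
}

// only ADMINs may delete users
if (authorities.contains(admin)) {
    resource.add(linkTo(UserController.class).slash(user.getUsername()).withRel("delete"));
}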

When authenticated as an ADMIN we would then see this sort of response after following the users linkrel:-

{
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/users"
    },
    "create" : {
      "href" : "http://localhost:8080/users"
    }
  },
  "_embedded" : {
    "users" : [ {
      "username" : "rubble_barnie",
      "roles" : [ "USER" ],
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/users/rubble_barnie"
        },
        "password" : {
          "href" : "http://localhost:8080/users/rubble_barnie/password"
        },
        "delete" : {
          "href" : "http://localhost:8080/users/rubble_barnie"
        }
      }
    }, …

Note the presence of the create linkrel on the users resource itself and the password and delete linkrels on the user resource.

Compare that with a response after following the same users linkrel, when authenticated as a COORDinator:-

{
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/users"
    }
  },
  "_embedded" : {
    "users" : [ {
      "username" : "rubble_barnie",
      "roles" : [ "USER" ],
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/users/rubble_barnie"
        },
        "password" : {
          "href" : "http://localhost:8080/users/rubble_barnie/password"
        }
      }
    }, …

Where, as you will probably expect by now, we don’t have the create or delete linkrels but do still have the password linkrel.

A note before we move on.  Clearly, this approach won’t stop any form of malicious attack, which is why we still need “security in depth” in the real world.  To show this you will see I have also configured security on the actual resources themselves, as follows…

[Screenshot: resource-level security configuration]

…so we aren’t simply relying on the web client, which is inherently weak of course.  Any malicious attempt to call a resource without proper authentication is going to result in a 401 Unauthorized.
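
The configuration itself isn’t reproduced above, but in Spring Security Java config it would be along these lines.  This is a sketch only; the exact matchers, authority names and login configuration in the repo may differ.

import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .formLogin().and()
            .authorizeRequests()
                // only ADMINs can create or delete users
                .antMatchers(HttpMethod.POST, "/users").hasAuthority("ADMIN")
                .antMatchers(HttpMethod.DELETE, "/users/**").hasAuthority("ADMIN")
                // ADMINs and COORDinators can set passwords
                .antMatchers(HttpMethod.PUT, "/users/*/password")
                    .hasAnyAuthority("ADMIN", "COORD")
                // everything else just requires an authenticated user
                .anyRequest().authenticated();
    }
}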

Client-side

OK, so the backend programming model is simple.  Hopefully it is fairly obvious how we can use the presence or absence of these linkrels on the client side to show, hide or disable components of the user interface.  So, let’s take a look at that next.

There are several Angular modules that support the application/hal+json media type.  As mentioned in the opening, I chose angular-hal primarily because it was small, simple and seemed to do what I required when I was initially thinking about this.

To set some context here as well, this is just a standard AngularJS app, BUT important to this article is that the hypermedia approach of linkrel following allows us to consolidate on a single angular-hal service, halClient, for interacting with the REST API regardless of the type of resource.  This is important.  I have found that this replaces the many type-specific services that I had been writing up to this point (I don’t like $resource, before anyone asks).

To kick everything off, in the application controller we bootstrap the client by using this halClient service to invoke the REST API entry point /api:-

.controller('appController', ['$rootScope', '$scope', 'halClient',
        function($rootScope, $scope, halClient) {

            $scope.root = function() {
                halClient.$get('/api', {
                    linksAttribute : "_links"
                }).then(function(resource) {
                    $rootScope.resource = resource;
                });
            };

            $scope.root();

        } ]);

This returns the top-level resource, as follows:-

{
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/api"
    },
    "users" : {
      "href" : "http://localhost:8080/users"
    }
  }
}

and halClient will decorate this object with a bunch of methods:-

[Screenshot: the halClient-decorated resource object and its methods]

allowing us to re-use this object to follow these linkrels further into the server’s REST API.  Note that we save this object to $rootScope.resource so that we can refer to it from anywhere.

As you might expect then, in the “users” route, we use $rootScope.resource.$get to follow the “users” linkrel, as you can see here:-

    $routeProvider.when('/users', {
        templateUrl : '/js/users/users.html',
        controller : 'usersController',
        resolve: {
            usersResource: function($rootScope) {
                if ($rootScope.resource)
                    return $rootScope.resource.$get('users', {
                        linksAttribute: "_links",
                        embeddedAttribute: "_embedded"
                    });
                else
                    return {};
            }
        }
    })

This call, when authenticated as an ADMIN, gets the following response from the server:-

{
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/users"
    },
    "create" : {
      "href" : "http://localhost:8080/users"
    }
  },
  "_embedded" : {
    "users" : [ {
      "username" : "wozza",
      "roles" : [ "ADMIN" ],
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/users/wozza"
        },
        "password" : {
          "href" : "http://localhost:8080/users/wozza/password"
        },
        "delete" : {
          "href" : "http://localhost:8080/users/wozza"
        }
      }
    } ]
  }
}

Note the presence of the create linkrel on the users collection itself and the presence of the password and delete linkrels on the “wozza” user resource.

We then use this usersResource response object to bootstrap the users UI:-

app.controller('usersController', ['$scope', 'usersResource',
        function($scope, usersResource) {

            $scope.resource = usersResource;

            usersResource.$get('users')
                .then(function(users) {
                    $scope.users = users;
                });

        } ]);

by storing the usersResource as the local scope variable resource and by storing the embedded users collection in local scope variable users.

An interesting aspect of this, which I haven’t had time to investigate yet, is how application navigation typically maps to REST API navigation.

I didn’t use it in this simple demo app, but I suspect that in a more complex application using ui-router (and its nested views) it will likely turn out that as you navigate deeper into the application you also navigate deeper into the REST API, so this $scope.resource overriding pattern might prove quite natural.

Ultimately, in the HTML template we use this local scope variable resource to show/hide, enable/disable various aspects of the UI:-

<table class="table">
    <tr ng-show="resource.$has('create')">
        <td><input type="text" placeholder="Enter username" ng-model="newUser.username"></td>
        <td><button class="btn" ng-click="onClickAdd(newUser)">Add</button></td>
        <td></td>
    </tr>
    <tr ng-repeat="user in users">
        <td>{{user.username}}</td>
        <td>{{user.roles}}</td>
        <td><button class="btn" ng-disabled="!user.$has('delete')" ng-click="onClickRemove(user)">Remove</button></td>
        <td><button class="btn" ng-disabled="!user.$has('password')" ng-click="onClickSetPassword(user)">Set Password</button></td>
    </tr>
</table>

You can see here that we show the table’s “create” row only if the create linkrel is present.  Similarly, we disable (rather than hide, this time) the Remove and Set Password buttons based on the presence or absence of the delete and password linkrels.  It is important to note that in those cases we are checking the individual user object, not the users collection.

In Conclusion

That is about as far as I will go for this article. 

I like this approach as it gives a really easy-to-understand and easy-to-implement programming paradigm on both the server and the client.  On the server side, Spring, Spring MVC, Spring HATEOAS and Spring Data REST have all made creating role-based controllers super easy.  Likewise, on the client, LuvDaSun has done a great job on angular-hal, allowing the server’s hypermedia to be consumed with no effort, leading to nice, concise client code.

Now, with this great power comes responsibility.  Just because we can do this doesn’t necessarily make it right.  And that is a more philosophical question on which I would like some feedback from my readership, please.

My take on it is that using Spring’s principal to decide whether or not to add linkrels to a resource is well aligned with prevailing trends towards micro-services and super-user sessions to backend data sources.  Conversely, in more traditional on-premise enterprise platforms, like Documentum, authorization decisions are usually provided by the backend data source, which is more secure.  However, what I have presented here could easily be adapted to defer these decisions to the data source instead.

Where Next?

There is lots I haven’t looked at here. 

I haven’t paid any attention to the excellent Spring Data and Spring Data REST.  Spring Data REST uses ALPS, which is an integral part of a proper hypermedia API of course.  So I would like to investigate adding ALPS; for example, clients could discover what the password linkrel actually does and whether it is idempotent (i.e. a PUT) or not.

I would also like to look at using ResourceProcessors to take a similar approach to that proposed here to add role-aware linkrels to Spring Data REST resources.  I think this combo could be very enabling.

Thanks all. 

Configuring XCP Designer to use a different maven local repository (or .m2 cache)

Hi Folks,

Long time, no speak.  Apologies for being quite so silent for so long.  I hope this blog post finds you all well.  I am starting to build up a bank of XCP Designer tips and tricks, so I’ll start blogging them here.

One of the questions that I often get about XCP Designer is:-

“Is it possible to have a separate .m2 cache for different instances of XCP Designer?”

(Which usually means different versions of Designer.)  And the answer is yes.

Under the covers of XCP Designer’s user interface is a completely standard instance of Eclipse and, important to this conversation, m2e.  And in fact the projects that XCP Designer creates are also completely standard Maven projects (we instantiate Maven archetypes, actually).  What this all means is that anything you can do in Eclipse, m2e or a Maven project you can pretty much do in XCP Designer.  The problem is that we hide all of the standard Eclipse and m2e user interface and project structure, so it is often difficult to know how!

For the case we are dealing with here, configuring a separate .m2 cache for different instances of Designer, m2e actually supports a workspace preference for pointing your Eclipse workspace at a settings.xml file (details of configuring Maven’s settings.xml can be found here).  m2e surfaces this in its preferences page, but of course XCP Designer hides it!  So we will configure its value via a property file instead, and then use this preference to point m2e at a settings.xml file of our choosing, in which we will configure the location of the local repository that we wish to use.  Here are the steps:-

  1. Customize your Designer installation(s) by using 7zip (or some equivalent) to add the following entry to the plugin_customization.ini file in <designer-basedir>/plugins/com.emc.xcp.builder.ui-<version>.jar:

    org.eclipse.m2e.core/eclipse.m2.globalSettingsFile=c:/temp/my-settings.xml

  2. Then, in c:/temp/my-settings.xml, add the following configuration:-
    <settings>
      <localRepository>/path/to/local/repo/</localRepository>
    </settings>

That’s it.  Pretty simple when you know how, huh?  In case it isn’t obvious, if you just want XCP Designer to use a different local repository (as opposed to configuring different instances of XCP Designer) then step #1 is optional.

Have fun!

_Paul 

The SOFTWARE principle

I wanted to copyright my new mnemonic and was looking for an easy way to do so.  Since Google indexes EVERYTHING and my blog posts are date-time stamped, I thought I could copyright it by simply posting 🙂

So here it is.

“Use the SOFTWARE principle in your development!

S eparate API from implementation
O SGi over J2EE
F ind simplest/smallest solution
T est drive your development
W ork out your alternatives
A ssertions and checked exceptions
R euse don’t re-write, and refactor often
E ngage others early”

Documentum Composer and Eclipse Team

Now that we have Composer installed on Eclipse JEE we have an open tooling platform that we can extend.  One such extension is Eclipse Team, which provides a source control API that most source control vendors (CVS, which comes with Eclipse anyway, SVN, Perforce, StarTeam, etc.) have integrated with.  Right out of the box you can share your projects with other team members via most standard source control systems.  For demonstration I’m going to use the Subversive SVN provider here, but the others all work in a similar way.

As part of the scenario we will be playing out the role of two developers by switching between two workspaces.

Installing the SVN Provider

First things first though, we need to install the Subversive SVN team provider.  Go to Help->Install New Software…  Select Helios in the “Work with” drop down and type “svn” into the filter box.  Select the Subversive SVN Team Provider (Incubation) and install it.

You will be prompted for the usual restart so go ahead and do that. 

Configuring the Team Provider for Documentum

Next we need to tell the team provider what to check in.  You need to do this in the other workspace too and I will remind you to do so.  Go to Window->Preferences…->Team->Ignored Resources.

Click Add Pattern… and add the following three patterns, one at a time:-

*.artifact
*.dar
/Web Services/bin/*

This tells all the team providers to ignore built artifact files (*.artifact), Documentum ARchives (*.dar) and all web service build artifacts.  There is no point in checking these in.

Sharing a Project

We now assume the role of the first developer.  He wants to share a project.  With the Documentum Artifacts perspective open, create a new Documentum project to share.  As this is the first Documentum project in your workspace, Composer will also create the core and the TCM reference projects for you.  As these two upstream projects are both referenced by your project you need to check these in too.  This ensures your project’s references will be satisfied when others check them out.  This principle stands for any referenced projects that you may add too.

The core and TCM reference projects are hidden in the Documentum Artifacts perspective, so open the Navigator view to show all three projects; Window->Show View->Navigator.  Now that you can see all three projects, select the core project first, right-click and select Team->Share Projects…

This will bring up the SVN Share Projects Wizard.

Enter the URL for your SVN repo and the login details.  We can default everything else so just click Finish. 

SVN will then pop up the Commit dialog, allowing you to enter a comment for the commit and to review the set of files that will be committed.  It is worth checking these to make sure there are no resources that should be ignored.

Click OK and the files will be added to the repository.  Once that’s done, repeat this process for the TCM reference project and lastly for your project.

Once all three projects have been shared your workspace should look something like this:-

Note the little orange repository icon in the bottom left hand corner of all your folders and files.  This is a visual indicator that these files are being managed with source control.

Importing a Project

So now other developers are free to work on these projects too.  Let’s have a look at how they might do that. 

Switch to a new, empty workspace and show the Documentum Artifacts perspective.  Now set up the ignored resources as we did earlier.

Now import the projects that were shared by developer one.  Right-click in the Project Explorer and select Import->Import… to bring up the Import dialog.  Find and expand the SVN category and select Project from SVN.

Click Next to launch the SVN Import dialog.  On the first page enter the repository info as you did before.  Click Next to bring up the Select Resource dialog.

Select the Browse button.  Navigate to the DocumentumCoreProject and select it. 

Click OK to confirm the selection and return to the previous dialog.  Then click Finish to bring up the Check Out As dialog. 

Accept the defaults on this dialog and Click Finish to start the import.

Once the core project has been imported repeat the same process for the tcm reference project next and finally for your project.  Don’t forget the core and tcm reference projects won’t be visible in the Documentum Navigator so you can open one of the other navigator views to see them.

Once all three projects have been imported you should see something like this:-

Again, note the little orange repository icons indicating these projects are under source control.

Updating a Project

Naturally your developers are now going to update the project by adding new artifacts, editing others and possibly deleting some others still.  This is all part of the natural life-cycle of the project.

So switch workspace back to the first developer and let’s make a couple of changes.  Let’s add a new format artifact and edit the description of the dar folder install parameter.  Your workspace should now look something like this.

Note the arrow (>) next to items that have been modified, and note also that these track all the way back up the tree to the project itself.  Note too the little question mark on the format artifact indicating it is new.  If you want to learn more about the SVN icons, go to Window->Preferences->Team->SVN->Label Decorations for this dialog:-

Now submit these changes.  The first thing we do is merge out; i.e. grab all the changes made by others whilst we have been working on our change.  This ensures we are compatible and not blindly forcing a breaking change into the repository.  Right-click on the project and select Team->Update.  Obviously in this case no one else has made any changes yet.  But typically at this stage we would resolve any conflicts and re-test our changes to ensure they are still good.  Then we know we are fit to check in.  Right-click the project and select Team->Submit…

Add a submission comment and click OK.  Note that the SVN modified indicators all disappear and your changes have been submitted into the repository.

In the meantime the developer that imported the project is now making changes too.  Let’s switch to his workspace and also make an edit to the description of the dar install folder parameter.

Resolving a Conflicting Change

You’ll notice that both developers have now changed the dar folder install parameter artifact.  The second developer is going to be made aware of this when he attempts to check his changes in. 

So, still as the second developer, do your merge out.  Right-click on the project and select Team->Update.  SVN will attempt to update your workspace.  It will grab any new, non-conflicting changes and attempt to resolve any conflicts.  It will update your project with the new format artifact.  However, it will be unable to resolve the conflict that has arisen with the dar folder install parameter.  You’ll be notified with the following dialog:-

Click OK to close the dialogs.  Now right-click on the project and select Team->Synchronize with Repository.  A dialog will pop up asking if you want to open the Synchronize Perspective, which is a collection of views for helping with synchronization tasks.  Choose Yes and the perspective will open:-

Right-click on the dar folder install parameter, the artifact that has the conflict, and select Edit Conflict.  This will open the XMI versions of the local and repository file highlighting the conflicts. 

The developer uses this editor to resolve the conflict in the normal way.  Once he is done he marks the file as merged by right-clicking on the artifact in the tree again and selecting Mark as Merged.  The artifact will be removed from the Synchronize view:-

Lastly, he right-clicks on the project and selects Commit and his changes, merged with others, are committed to the source control system.

Conclusion

In today’s post we’ve taken a look at the Subversive SVN Team Provider for Eclipse and demonstrated how it can be used with Composer, unmodified, to share Documentum projects amongst a team of developers through an SVN source control system.

The same basic approach holds true for the other team providers, except for the subtle differences between providers’ feature sets.  Over time I will try and demonstrate the same scenarios using CVS and Perforce.

In the meantime – Happy Composing!

Composer on Helios

I know that many of my readership do like to run Composer on Eclipse, as opposed to running Composer the RCP product, which is still based on Eclipse 3.4.  Composer the RCP product is a closed box.  Composer on Eclipse, on the other hand, is an open platform that you can extend through software updates in order to leverage any of the multiple complementary tool-sets that are out there.

So, to that end, a quick set of instructions on how to get Composer running on the latest version of Eclipse, 3.6, codenamed Helios.

Download and install the JEE version of Eclipse from here.  The JEE version already contains most of the dependencies that Composer requires to run, including EMF, GEF, etc.  Start it up and choose a workspace.  Once it has started, the only additional package you need is the EMF validation framework.  Go to Help->Install New Software…  Choose “All Available Sites” or “Helios” in the “Work with” drop down and in the filter box type “emf validation framework” and you should see something like this:-

Select the SDK.  Click Next to review the installation.  Click Next again and accept the licence agreement.  Then click Finish to install.  Once it has been downloaded and installed into your Eclipse run-time you will be asked to restart – go ahead and do this.  Once it has restarted the install is complete.  Now close Eclipse again as you now need to add the Composer plugins.

Copy all folders and jars beginning “com.emc.ide%” from the features and plugins folders of your existing Composer 6.5 install into the dropins folder under a /composer/eclipse path.  It should look like this:-

Now restart Eclipse one more time and navigate to Help->About Eclipse to confirm the Documentum logo is there as shown in the following image:-

And you’re done.  As always, the obligatory support statement: Composer isn’t officially supported in this configuration, although technically there is no difference.  Any issues you did find would need to be reproduced on Composer RCP before Tech Support would accept them.

Lastly, a small request from me.  Before I left on my LOA, and now that I am back, I have always actively encouraged EMC to host and support Composer as a set of plugins from an update site, like the rest of the Eclipse community, as well as Composer as a product.  But I am just one of a few voices saying this within EMC, who obviously weigh our opinion up against the cost of hosting the update site.  So if you would like to consume Composer from an update site, if you would like Composer to be an open platform, it is important that you let EMC know this by telling them directly or by commenting here.

OK, that’s it.  In future posts I’ll show you how you can leverage some of the complementary tool-sets I mentioned earlier and that you now have available to you.  And obviously feel free to experiment and let us know what works for you.

Happy Composing

Hello Again!

Hi Folks,

After 14 months I’m back.  I didn’t leave EMC and return.  I took leave of absence in order to fulfill a commitment elsewhere.  

I’ve been back for a little under a month now.  A lot of things have changed, as you can imagine.  Sadly, a few colleagues have moved on.  You know who you are and I would like to take this opportunity to publicly thank them and to wish them all the best in their new positions.  I also have a few new colleagues and I would like to say hello to them.  I look forward to working with you.

I feel like I am getting my feet back under the table so my normal blogging service can now be resumed.  I have a couple of Composer posts lined up and I will hopefully be able to start blogging about all things XCP in the near future, so stay tuned.

Cheers,
_Paul

Documentum Composer: The Core Project

Artifact Concerns

As most of you will be aware every Documentum Composer workspace with at least one Documentum Project in it also has an additional project called the Documentum Core Project.  In this blog post I wanted to investigate this special project a bit.

The core project is created when you create your first regular project.  When using the Documentum Artifacts perspective it is hidden from the navigator view, so you don’t normally see it.

[Screenshot: the Documentum Artifacts perspective with the core project hidden]

But if you switch to the Java perspective, or something similar, you will be able to see it.

[Screenshot: the Java perspective showing the Documentum Core Project]

It is a read-only project and, unlike regular projects, it does not have an artifact builder.  So whilst it contains a set of artifacts, they are never built and the core project can never be installed.  So what is its purpose?

As you will undoubtedly be well aware, when you install a new Repository it is not completely empty.  It contains a set of objects: type definitions, a folder structure, some document templates, some default access controls, some formats, etc.  The core project then acts as a reference project for this set of artifacts.  Every Documentum Project that you create references this core project.

[Screenshot: a Documentum project’s reference to the core project]

This in turn allows you to reference any artifact in that core project.  So, for example, you could create a new type definition whose supertype is the core project’s dm_sysobject.  Moreover, when the dar installer installs your project, and therefore your type artifact, into the Repository as a docbase object, it will resolve the type’s supertype DM_ID reference to the r_object_id of the dm_sysobject type in that Repository.

A type’s supertype is just one example.  Every Documentum object model exhibits these object references, and therefore each instance of an object model forms a complex graph of objects knitted together by these references.  And the core project helps developers to build on top of and extend the core object models which together make up the Documentum platform.

This referential project model is also an indication of several other important concepts.

  1. We expect developers to create many small, cross-referencing projects as opposed to a single large, monolithic, all-encompassing project.
  2. We expect an artifact to exist in one project only, and we expect it to be referenced when required.  This is in contradiction to DAB’s “drag it in” model.
  3. Projects, and in fact references to projects, should be versioned.

Of course, in some ways Composer supports the referential project model very well.  However, install suffers from a general lack of error checking around pre-requisite projects.  It shouldn’t, for example, be possible to install project B without a compatible version of project A being installed already.  We also don’t support project versioning very well.  This support was planned for v1 but it unfortunately fell off the list due to time and resource constraints.  My hope and expectation is that we will be addressing this soon.

Composer also doesn’t support a full development lifecycle either.  In the majority of cases I would argue that it is not particularly desirable for us, or for our partners, to have to distribute source projects (for one thing it contravenes rule 2 above) just to enable developers to extend our offerings and our partners’ offerings.  But this is exactly what has to happen today.  For this reason I have always wanted projects to be able to reference binary DARs as well as, or in fact instead of, projects.  This would allow us and our partners to distribute DARs instead, which makes much more sense.  To this end, a few months ago I completed an elaboration/proof of concept demonstrating this capability, so my hope again is that this will also be added soon.

DFS

Another aspect of the core project is DFS development.  At the moment a Documentum Project is overloaded: it is both an artifact project, supporting artifact development, and a DFS project, supporting DFS consumer and service development.

This DFS development occurs against the DFS SDK, which is not, unfortunately, a freely available, standalone distributable like the JDK, for example.  So we ship it with Composer inside one of our plug-ins.  Therefore, to support the development of a DFS consumer or a service, we have to deploy the SDK from the plug-in to a known location.  We chose to deploy the SDK to the core project as this is a location we know will always exist.

Over time we should see this design move towards the one most IDEs employ to manage the JDK, where each project may be associated with one of potentially many external SDK installations registered with the IDE itself.  This will decouple Composer from a particular version of the SDK and allow for separately managed upgrades of Composer and the DFS SDK, giving developers more freedom to upgrade Composer but stay developing against their current DFS SDK, or vice versa.  The majority of the code for this is in place in the DFS plug-ins.  We just need to enhance the UI to support registering DFS SDKs and associating them with (DFS) projects.  Oh, and we need to promote the DFS SDK itself as a freely available and downloadable distributable, on EDN 🙂

Conclusion

In this article we’ve taken a look at the Documentum Core Project, a part of Composer we have not looked at before but one of some significance.  We have discussed some of its implications and touched upon some of the (hopefully) not too distant enhancements.

As always, questions gratefully received.

Happy Composing!

Creating a DFS Services Client using Documentum Composer

One of my readers left a comment saying that my previous post left him high and dry when it came to creating the client project for testing his service.

So I put this very quick post together to describe it for him and hopefully for others too.

First thing to note here is that there are lots of ways to create a DFS client, spanning both Java and .NET.  This is just one way, using Documentum Composer.

So where did the previous post end up?  Well, we had created a Documentum Project and within that created a DFS service, exported it as a service archive and deployed it onto a suitable application server.

So what next?

Create yourself another Documentum Project (File->New->Other->Documentum Project).

Next create yourself a lib folder and into it import (Import->File System) your service’s remote jar.  This will have been exported at the same time as your service archive.  If you called your service archive samples, for example, then you would look for samples-remote.jar.

Once imported, add it to your Java build path (<project>->Properties->Java Build Path->Libraries->Add JARs…).  Navigate to your remote jar within your client project and select it.  You should end up with the equivalent of this:-

[Screenshot: the client project’s Java build path with the remote jar added]

You also want to make sure you have the right DFS client class path configured.  So, whilst you are on this dialog, select the “DFS Services Library” build path entry and click Edit…  Select an appropriate library type for your scenario.  I’ve chosen “DFS Remote Client Library with UCF”:-

[Screenshot: the DFS Services Library dialog with “DFS Remote Client Library with UCF” selected]

Click Finish.

Finally you need to write some code to call your service.  There are lots of contexts that your code could execute within: a Main class, a Java application, an Eclipse application, and lots of others too.  I’m a big fan of test-driven development so I like to create JUnit tests, as these can be automated later and become part of the application’s test suite.  Let’s create one of these (File->New->Other->Java->Junit->Junit Test Case):-

[Screenshot: the New JUnit Test Case wizard]

Give your JUnit test a Package and a Name and make sure you’ve chosen to create a constructor stub.  Don’t worry about the super class warning – just click Finish and you should see this dialog:-

[Screenshot: the prompt to add the JUnit 3 library to the build path]

Obviously we do want to add the JUnit 3 library to the project’s class path, so choose to perform that action.  Click OK.

Composer should do that for you and open your newly created JUnit test case in the Java editor.  We now need to add our service orchestration code.  Add a method called test with the following signature shown:-

[Screenshot: the test method signature]

And then add your orchestration code to that method.  If you’re not sure how to get started with this then a great place to start is the client code that ships with the DFS SDK.  Cut and paste in a sample remote client and re-purpose it for your needs by changing the service-related calls to use your service.  I’m actually using the DFS Hello World sample for this article.  As you are adding code, especially if you cut and paste a sample, you’ll end up with a bunch of problem markers:-

[Screenshot: the test case with unresolved import problem markers]

Because we already configured the project with a DFS class path you can hover over the problem marker in the sidebar and use the quick fix to add the necessary DFS imports.
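
For reference, re-purposed remote orchestration code based on the Hello World sample tends to look something like the following.  This is only a sketch: the service interface, credentials, module name and context root are placeholders, and the exact package names and getRemoteService signature should be checked against your version of the DFS SDK.

public void test() throws Exception {
    // Build a service context carrying the repository credentials (placeholders)
    ContextFactory contextFactory = ContextFactory.getInstance();
    IServiceContext context = contextFactory.newContext();
    RepositoryIdentity identity = new RepositoryIdentity();
    identity.setRepositoryName("myrepo");
    identity.setUserName("dmadmin");
    identity.setPassword("password");
    context.addIdentity(identity);

    // Look up the remote service.  The module name ("samples") and context root
    // must match the values used when the service archive was exported.
    ServiceFactory serviceFactory = new ServiceFactory();
    IHelloWorldService service = serviceFactory.getRemoteService(
            IHelloWorldService.class, context, "samples",
            "http://localhost:8080/services");

    // Call the service and print the result
    System.out.println(service.sayHello("World"));
}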

A good tip here is to note that the moduleName and contextRoot (in the code sample above) must match the equivalent settings that you configured for your services project and that are also specified in the Export Service Archive dialog:-

[Screenshot: the Export Service Archive dialog showing the module name and context root]

That is pretty much it.  Make sure your services are deployed, that the server is running, and that the WSDL for each service is available.

To run the Junit test right-click on it in the project explorer and select Run As->Junit Test.  This will execute your test case using the Junit test framework and display the results.

If you want to debug the code then set a break point in the code and select Debug As->Junit Test instead.

Alternative

Now, it is important to note that DFS supports two modes of invocation: remote, as we outlined above, and local.

This provides us with an alternative way to test our service code with less fixture than the remote alternative; i.e. without having to deploy it to a server.  

First we need to add some of the service’s resources to the services project’s class path so that the DFS runtime can find them.  The EAR is configured in such a way that these same resources are available on the class path when it is deployed on an app server.  We are just mimicking this configuration.

So right-click on your services project and select Properties->Java Build Path->Libraries.  Click Add Class Folder.  Navigate down your services project, expand the Web Services folder and the bin folder, and check the gen-rsc folder.  Click OK.  You should see this folder added to the build path:-

[Screenshot: the gen-rsc class folder added to the build path]

That’s the services project configured.

Now create another Documentum client project.

This time round make sure you add your service project to the Java Build Path (<client project>/Properties->Java Build Path->Projects):-

[Screenshot: the service project added to the client project’s Java build path]

As before, edit the DFS Library Path entry, but this time select “DFS Local Library”.

There is no need to add the remote jar as we will pick up the service classes directly from the project.

As before, add the JUnit Test Case.  If you get any compilation errors then quick-fix them.  Note that when you add your service code you must call getLocalService instead of getRemoteService AND you don’t need to register the service context with the ContextFactory ahead of time.  So your service invocation code simply becomes:-

ContextFactory contextFactory = ContextFactory.getInstance();
IServiceContext context = contextFactory.newContext();

ServiceFactory serviceFactory = new ServiceFactory();  
IHelloWorldService service = serviceFactory.getLocalService(
               IHelloWorldService.class, context);

And that’s it.  You are ready to go.  Right-click on the Junit class in the project explorer and select Run As (or Debug As)->Junit Test.

Conclusions

So we’ve walked through creating a remote DFS consumer and looked at an alternative for creating a local DFS consumer that leverages DFS’s local invocation facility.

As always I await your feedback.  In the meantime…

Happy Composing!