
Tim Explains: SSJS Object Persistence

Below is an email thread between Serdar Basegmez, Tim Tripcony, and myself. Serdar reminded me of it and re-sent it so I could share it. It's a little dated, but I'm reposting it because information this good should be shared.

This pretty much marks the moment I came to grips with the fact that if I wanted to use XPages the way I had developed for the Notes Client, I needed to look past Server-Side JavaScript and focus on Java. I liked building my applications around custom classes in LotusScript. This email explains why that approach no longer works in SSJS, and why, if I wanted to keep my client-development "comfort zone," I needed to use Java for it.

Now, this realization did not happen overnight for me. I believe this email came in right after my famous battle with Phil Riand, where I said I'd never use Java, that SSJS should be made better, and that the "bug" should be fixed. He said it's not really a "bug" and to use Java. We went back and forth a bit, I ended up losing royally, and now I write in Java. haha

 

So anyway here’s Serdar’s original question:

From:        Serdar Basegmez 
To:       David Leedy, Tim Tripcony

Date:        06/23/2011 10:49 AM
Subject:        About persistence of SSJS objects…



Hello champs and congrats to both…

Tim, I have added you as well; David and I talked about this before, and I know you are dealing with this problem too.

I don't know the latest status, but after I talked with David, I read Tim's answer again and did lots of tests. So Tim is right: there is no way to put JS functions into scope variables with disk persistence.

Yesterday, I ran into a different problem that looked like a similar situation to me, so I wanted to share it.

I have an SSJS library added to my custom control. I am using a global object like this…

var xyz = {
    someProperty: "some value",
    aDoc: null,
    aDB: null,

    init: function(config) {
        this.aDB = session.getDatabase("someserver", config.dbName, false);
        this.aDoc = this.aDB.getDocumentByUNID(config.someId);
        this.someProperty = "some other value";
    },

    aMethod: function(someValue) {
        var someView = this.aDB.getView("A_View");
        // do stuff with view
        this.aDoc.replaceItemValue("OtherField", someValue);
    },

    anotherMethod: function() {
        return this.aDoc.getItemValueString("OtherField") + " — " + this.someProperty;
    }
}

Now, I am calling xyz.init({ … }) in the beforePageLoad event of the custom control. Everything is fine. I am using xyz.anotherMethod() in a computed field and I get results. That's fine.

But when I run a server-side event in a button with xyz.aMethod("ZZZ"), it gives an error, because the aDB object becomes 'faulty'. It's not null; I can print and dump it and see it's still a NotesDatabase object, but its properties and methods cannot be accessed after cycling through partial/full updates.

I first suspected a scope problem; could it be that, insanely, global variables aren't supported?

Then I did some other tests and saw that it preserves strings, integers, dates, and other standard objects. But objects like Notes documents, views, or databases get corrupted in this cycle.

What do you think? Is it normal?

————————————————————–
Serdar Basegmez

And here’s Tim’s Response:

 

From:        Tim Tripcony 
To:        Serdar Basegmez 
Cc:        David Leedy
Date:        23.06.2011 20:46
Subject:        Re: About persistence of SSJS objects…

Serdar, your issue is slightly different from the one that David has been experiencing. All SSJS code is interpreted as a series of Java objects at runtime, and as such, uses the lotus.domino Java API. One limitation of this API is that each Java object must maintain an internal pointer to the actual C object it corresponds to. Unlike Java, C does not have automatic garbage collection, so if one of these Java objects is destroyed without cleaning up the C pointer, the server runs out of memory very rapidly. As a precaution, therefore, the Java objects in this API have a “recycle” process, and the entire session is recycled after every transaction, if you’re using the default settings in 8.5.2.

Rather than storing the actual database or document, best practice is to store “primitive” data about each: for example, storing the server name and filepath of the database, the UNID of the document, etc. By storing this information, it becomes easy to reacquire the object handles as needed, without actually storing the object handles themselves, which become toxic after each request.
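Tim's advice can be sketched in plain Java. The class below is illustrative (the name DocumentRef and its fields are mine, not part of the XPages runtime): it holds only serializable strings, and a real XPages application would reacquire the live handles on each request from these keys.

```java
import java.io.Serializable;

/**
 * Hypothetical sketch of the "store primitives, reacquire handles" pattern.
 * Holds only strings, so it survives serialization to any scope.
 */
public class DocumentRef implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String serverName;
    private final String filePath;
    private final String unid;

    public DocumentRef(String serverName, String filePath, String unid) {
        this.serverName = serverName;
        this.filePath = filePath;
        this.unid = unid;
    }

    public String getServerName() { return serverName; }
    public String getFilePath()   { return filePath; }
    public String getUnid()       { return unid; }

    // In a real XPages app you would reacquire the handles per request,
    // e.g. (not compiled here):
    //   Database db = session.getDatabase(serverName, filePath);
    //   Document doc = db.getDocumentByUNID(unid);
}
```

Because every field is a `String`, an instance of this class can sit safely in viewScope or sessionScope across partial refreshes, which is exactly what the raw Domino handles cannot do.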


However… even if you correct this issue, it is likely that you will encounter the same behavior that David has mentioned. The problem is that SSJS functions (or objects that contain functions or closure pointers to functions) cannot be stored in any scope higher than the requestScope.
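The underlying mechanism is easy to reproduce in plain Java: standard serialization rejects any object graph that contains a non-serializable member, which is essentially what happens when a scope map holding SSJS function objects (or Domino handles) is written to disk. The `OpaqueHandle` class below is a stand-in of my own, not a real Domino or SSJS type.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.util.HashMap;

public class SerializationDemo {

    // Stands in for an SSJS function object or Domino handle: not Serializable.
    static class OpaqueHandle { }

    /** Attempt to serialize o; report whether it succeeded. */
    static boolean canSerialize(Object o) {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        HashMap<String, Object> scope = new HashMap<>();
        scope.put("someProperty", "some value");  // primitives survive
        System.out.println(canSerialize(scope));  // true

        scope.put("aDB", new OpaqueHandle());     // an object handle does not
        System.out.println(canSerialize(scope));  // false
    }
}
```

One non-serializable value poisons the whole map, which is why a single stored function or handle breaks persistence for the entire scope.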


This behavior is a regression introduced in 8.5.2. IBM feels differently: due to the complexity of making SSJS functions serializable, they chose not to do so when they added the serialization process to Domino in 8.5.2. Their stance is that storing SSJS functions is not best practice.


As far as I’m concerned, however, it’s a bug. This worked in 8.5.1; it doesn’t work in 8.5.2. That’s the very definition of a regression bug: something used to work, now it doesn’t. That’s a regression.


Strictly speaking, there is an option available to allow it to continue to work: adjust the application properties to "keep pages in memory" (in the Performance section of the XPages tab). This was the only option in 8.5.1, as serialization had not yet been implemented, so it essentially tells the memory management to revert to the 8.5.1 behavior. This is only acceptable if your application does not need the additional scalability offered by 8.5.2… and because the scalability gains offered by serialization are significant, "keep pages in memory" is not the default option. You will need to change it manually (either within every application, or in the xsp.properties file on the server) in order to store SSJS functions in the viewScope.
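For reference, the 8.5.2-era Designer setting Tim describes maps to the xsp.persistence.mode property. The values below are my best recollection of that release's options; verify them against your server's documentation before relying on them.

```
# Server-wide default in the server's xsp.properties,
# or per-application in the NSF's own xsp.properties.
# basic  = keep pages in memory (the 8.5.1 behavior; SSJS functions survive)
# file   = keep pages on disk
# fileex = keep only the current page in memory
xsp.persistence.mode=basic
```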


Nathan has instructed me to create a demo application that explicitly illustrates why IBM needs to remove this limitation. I will send it to both of you by 6 PM EDT.


In case you’re curious, here’s a (somewhat) quick explanation of why serialization enhances application scalability:


According to the JSF specification (which is the basis for the majority of the functionality in XPages), the process of serving a page to a browser (or other consumer) consists of a 6-phase “lifecycle”. The first of these phases is referred to as “restore view”. All this really means is that, when the page is accessed, the server constructs an in-memory tree structure, conceptually similar to a DOM (though the implementation is quite different). This structure facilitates all the subsequent work the server has to perform to determine what markup to transmit.


Once the page markup has been transmitted, the server doesn’t need to be aware of that tree structure anymore… unless events are fired. If a user clicks a button on the page, the server now needs to “know” not only what that button is supposed to do, but how it fits into the overall component tree. If it’s trying to interact with other components (e.g. getting / setting field values), performing operations against defined data sources, and so on, Domino needs to know the state of the impacted objects. Similarly, even without directly manipulating other controls, an event can have indirect impacts. A fairly simple example is a div whose visibility is bound to a viewScope variable and a button that toggles the value of that variable. In order to evaluate the updated state of the component tree following any events, the server must first be aware of the previous state of the component tree. Hence, when an event occurs, the “restore view” phase does not construct a new in-memory tree structure, it “restores” the tree based on the previous state of each component.


In 8.5.1, this was easy: the entire tree structure simply stayed in memory. So when the server received a request indicative of an event against an existing page instance, it didn’t have to rebuild the tree… it just accessed an in-memory pointer to the tree it was already storing. This is why XPages tend to feel so fast in comparison to traditional Domino applications: when interacting with a form, page, view, etc., Domino has more work to do when responding to equivalent events to decide what HTML to send… with an XPage, most of the work was done on the initial page load, so partial refresh events tend to be lightning fast. The downside is that this can consume a lot of memory over time. Domino still does periodic cleanup to keep things from spiralling out of control, but if this is the only behavior available, XPage applications cannot massively scale without massive hardware.


In 8.5.2, they changed the behavior of the final phase of the lifecycle: the “render response” phase. As soon as the response has been fully transmitted, Domino checks the application settings to determine whether serialization is needed, and responds one of three ways:


1. The setting is “keep pages in memory”: no action is necessary. The component tree is left in memory, and the page lifecycle terminates.

2. The setting is “keep pages on disk”: the server serializes the entire tree structure and removes all memory pointers to the components, prior to terminating the lifecycle.

3. The setting is “keep only the current page in memory”: this gets a little bit complicated. As far as I’ve been able to tell in my own testing, the “render response” phase behaves the same as the first setting, but then the “restore view” phase behaves differently: prior to restoring the component tree, it determines whether the page being accessed is the one already in memory (or the first of a new session)… if so, the phase proceeds as normal. If, on the other hand, it already has one page in memory for the current user, but the page being accessed is different, it serializes the other in-memory tree and removes it from memory, then loads the new page.


The last setting is a great compromise: when you interact with a page you already have open, any events you trigger can be performed rapidly, because the component tree for that page is stored in memory. But as soon as you navigate to a different page (or reload the current page), theoretically you don’t need that page ever again… you may come back to that page later in the URL sense of a page, but not in the user interaction state sense. So the server keeps its memory fairly clean, but in-page interaction is fast.
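The "keep only the current page in memory" behavior can be sketched as a one-slot cache per user session: the live tree stays in memory until a different page is requested, at which point it is serialized out and the new page takes the slot. This is an illustrative model of my own, not the actual XPages persistence code.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative one-slot page cache; not the real XPages implementation. */
public class CurrentPageCache {
    private String currentPageId;  // the one page held in memory
    private Object currentTree;    // its live component tree
    private final Map<String, Object> disk = new HashMap<>(); // stands in for serialized storage

    /** Restore-view step: return the tree for pageId, evicting any other page. */
    public Object restore(String pageId) {
        if (pageId.equals(currentPageId)) {
            return currentTree;                       // fast path: already in memory
        }
        if (currentPageId != null) {
            disk.put(currentPageId, currentTree);     // "serialize" the old page
        }
        Object tree = disk.containsKey(pageId)
                ? disk.remove(pageId)                 // reload a previously evicted page
                : new Object();                       // or build a fresh tree
        currentPageId = pageId;
        currentTree = tree;
        return tree;
    }

    public String inMemoryPage() { return currentPageId; }
    public int pagesOnDisk()     { return disk.size(); }
}
```

Note how alternating between two pages, as in the multiple-window scenario, forces an evict/restore round trip on every call to restore(), while repeated events against the in-memory page never touch the "disk" at all.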


The one exception to this scenario is if you’re opening multiple tabs / windows; if one page does a window.open() to launch a page (even the same URL) in a separate browser instance, as far as the browser is concerned, you now have two pages open. But Domino only keeps one at a time, so the page corresponding to the new window is now in memory, so it serializes the previous page. But if you go back to the first window and trigger an event, Domino now has to serialize the “new” page before loading the “old” page in memory. This can reduce some of the performance benefits of keeping the current page in memory, but the scalability impact remains the same, because Domino only ever holds one page per user session in memory.


Anyway, hope this explains what you’re experiencing. I’ll send you the demo database soon.


Tim Tripcony

 

And finally, Serdar’s followup:

From:        Serdar Basegmez
To:        Tim Tripcony
Cc:        David Leedy
Date:        23.06.2011 21:21
Subject:        Re: About persistence of SSJS objects…


Tim,

It explains things very well. I had to re-read the second section a couple of times, but I totally understood the toxicization of my objects 🙂

One thing came to mind while reading.

The Domino 'objects' we use in SSJS are backed by C objects. I presume this is the same for the Java API, because I had lots of trouble with recycling issues in the past when I was working with WAS integration over DIIOP.

However, when we develop beans in Java, we work with Domino classes through the Java API but we never deal with recycling. Isn't that a potential problem for beans?

Beans are also stateful. Even if they don't get into the XSP lifecycle, as long as the XSP server is up they will consume more and more memory. Am I wrong? In addition, I think JavaBeans have to be serializable on Java servers, but our beans cannot be serialized because the Domino classes will not serialize.
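Serdar's bean concern is usually addressed with the transient keyword: serialize only the primitive keys and rebuild the Domino handle lazily after deserialization. The class below is a generic sketch of that pattern; acquire() stands in for the session.getDatabase(...) call a real bean would make, and all names here are mine.

```java
import java.io.Serializable;

/** Generic sketch of the transient-handle bean pattern. */
public class DatabaseBean implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String serverName;   // primitive keys: these serialize fine
    private final String filePath;

    // The live handle is never serialized; it is reacquired on demand.
    private transient Object handle;

    public DatabaseBean(String serverName, String filePath) {
        this.serverName = serverName;
        this.filePath = filePath;
    }

    /** Lazily (re)acquire the handle; transient fields come back null after
     *  deserialization, so this re-runs automatically on the next access. */
    public Object getHandle() {
        if (handle == null) {
            handle = acquire();
        }
        return handle;
    }

    // Stand-in for the real session.getDatabase(serverName, filePath) call.
    protected Object acquire() {
        return "handle:" + serverName + "!!" + filePath;
    }
}
```

Because the handle field is transient and rebuilt on demand, the bean itself stays serializable even though the object it wraps is not.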

About the serialization of JavaScript objects, I can totally guess what they might have said: "This is optional stuff, blah blah; you may disable it, blah blah…"

I totally agree with you. This is a clear regression. They created a new mechanism to improve performance, but they excluded an important feature.

Thanks for the explanation…

Have a nice day…

————————————————————–
Serdar Basegmez