The Content Basket component uses ZIP Rendition Management to package the contents of the basket for download as a ZIP file. By default, two limitations are imposed on your download: it must contain five hundred (500) pieces of content or fewer, and it must total five hundred (500) megabytes or less. The two settings that control this are actually part of ZIP Rendition Management, not Content Basket.
This variable represents the maximum size in megabytes (the default is 500). From the internal notes: it is unlikely that a browser + web server will successfully download a file over 2 gigabytes in all scenarios (you need 8-byte integers in the Content-Length HTTP header), so the effective maximum cap on this value is 2000 megabytes.
This is the max number of items available to package in the bundle.
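The text above describes the two settings without reproducing their names, so the entries below are a sketch with hypothetical variable names; check the ZIP Rendition Management component documentation for the actual names before using them.

```
# Hypothetical variable names for illustration only; consult the ZIP
# Rendition Management component documentation for the real ones.
# Goes in <install>/config/config.cfg (restart required).

# Maximum total ZIP size in megabytes (default 500; effective cap ~2000)
ZipRenditionMaxSizeMB=1000

# Maximum number of content items allowed in one bundle (default 500)
ZipRenditionMaxItemCount=1000
```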
Get it on the downloads page.
I am very happy to introduce Alphabetize Menus MARK III (in honor of my love of the Iron Man movies). This version introduces compatibility with Safari and Chrome. Thanks to Shane D. for pointing this issue out, helping with some of the testing, and finally getting me to resolve this long overdue bug.
I conducted a presentation at Collaborate 2010 today showing some tips and tricks for installing Oracle Universal Content Management, formerly known as Stellent, on an EC2 instance.
Here are a few references:
One of the participants asked about license ramifications and this might help understand that arena a little:
The new (as of around March 15th) Core Update Bundle (build 55) includes an updated Folders_g component. If you tried to install the Folders_g component that came with the update bundle released on or around December 30th of last year, you may have run into issues with the install not working.
The issue revolved around new installations of the folders component not creating tables, etc. in the database. If you had previously installed folders and you simply upgraded you may have been fine. This only seemed to affect new installs.
Either way, grab the newest update bundle and hopefully we can put these issues behind us. So far, from what I’ve tried, it has worked much better.
When you are developing and testing workflows in UCM, the Update Event of a workflow step can be your worst enemy. The update event fires roughly every sixty minutes in a standard UCM configuration, so any Idoc Script you put in a step's Update event might have to wait as much as sixty minutes to execute. This makes testing these scripts difficult. Sure, in some cases you can use the built-in test harness. However, this little configuration change will be a big boost:
Adding this configuration variable to your <install>/config/config.cfg, or through the Admin Server under General Configuration, and restarting your content server will give you a big development/test boost. This reduces the period from sixty minutes to five minutes.
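The variable itself is not named in the text above, so the entry below is only a sketch with a hypothetical name; verify the exact variable against your UCM version's documentation before relying on it.

```
# Hypothetical variable name for illustration only; confirm against
# your UCM documentation. Goes in <install>/config/config.cfg.
# Fire the workflow update event every 5 minutes instead of every 60.
WorkflowUpdateIntervalMinutes=5
```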
Happy workflow debugging!
If you want to hide the Alternate File field on the check-in page, you can add the following configuration to your <install>/config/config.cfg file. You can also add it to the General Configuration Variables in the Admin Server. Both methods require a restart. This hides the Alternate File field globally. And what is the setting? It looks like this:
What if you would like to do this for a certain profile, but not globally? No problem. Simply select one of your rules for your profile. Enable the activation conditions and add this setting as a side effect:
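The setting itself is not reproduced above, so the sketch below uses a hypothetical variable name to show the shape of both approaches; confirm the real variable name in your UCM documentation.

```
# Hypothetical variable name for illustration only.

# Global version, in <install>/config/config.cfg (restart required):
IsHideAlternateFile=1

# Profile version: add the same name=value entry as a side effect on
# one of the profile's rules, with the rule's activation condition
# enabled, so it applies only when that profile is active.
```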
I wanted to talk a little about how we take advantage of Amazon Web Services, especially the Elastic Compute Cloud, here at Redstone Content Solutions. This is not marketing drivel. The goal is to describe how you can benefit from the cloud for your Oracle endeavors by describing how we actually do just that ourselves. Sometimes it helps to just hear about how others are using something.
We use the Amazon Elastic Compute Cloud, sometimes referred to as Amazon EC2, for a variety of purposes. The topics for this post will be the use of EC2 for Development and Training. I will release a post in the future with additional thoughts about using EC2 to host your Production environment.
For those who are attending COLLABORATE10 at Las Vegas’ Mandalay Bay Hotel & Convention Center April 18-22, I will be giving a presentation on this subject. The session, entitled Build your own UCM Stellent Instance in Amazon EC2, will be held on Thursday, April 22nd, 2010 at 8:30 a.m. in Room 2. The session ID is 128. Session dates and times are subject to change, so stay tuned!
We actually do use EC2 to host development and training environments. I interact with EC2 environments on a daily basis. The primary thing you need to take advantage of EC2 for these purposes is a reliable internet connection. Note: I will be discussing mostly Oracle Content Management-based environments, but we also use them for WebCenter Framework and SOA Suite.
Originally, when we started using EC2, machines had to be kept running. If you restarted an instance you lost your "state" unless you re-bundled the instance and persisted it as an Amazon Machine Image (AMI). There were a few tricks revolving around attached storage that we could use to avoid this to a certain extent. Additionally, images we created ourselves had an image size cap, which made things difficult as we tried to construct base images within the 10 gigabyte range.
With some recent announcements in December by Amazon the above restrictions are now a thing of the past. We can launch instances, change data, shut down and launch again with no loss of state or data. It actually acts like a real piece of hardware now. With this new functionality, we can boot images directly off of Elastic Block Storage (also known as EBS). This means when we shut our instance down, the resources required to run the instance are not reclaimed. The resources (namely disk) are kept in our EBS volumes and their data persists across shutdowns. Now when we launch our instances the resources are already allotted and immediately available. Hence, our boot times for launching instances are much faster. If you’ve ever tried to launch an EC2 instance you know why I am excited about this.
EC2 also works with Amazon Simple Storage Service (S3) which is different from EBS. Think of EBS as blocks of space you can attach as volumes to machines. Amazon S3 acts like your corporate SAN where you can store all kinds of information. Whenever you transfer something from your own machine or elsewhere in the world to an EC2 instance or S3 Amazon will charge you. However, Amazon does not charge to move data from S3 to any of your EC2 instances. So, we store installers, patch sets, etc., in S3 and then we copy those to our EC2 instances. You just have to get your content into the Amazon cloud and then you can move it around within the cloud for free.
You can work with a variety of operating systems in EC2. You can use Windows Server 2003 or 2008 and many flavors of Linux. There is even a process you can go through to convert VMware Workstation files to an Amazon Machine Image that you can upload and run in the cloud. Since we work exclusively with Oracle products, we use Oracle Enterprise Linux (OEL) extensively. There are several pre-built AMIs available from Oracle for Oracle Enterprise Linux and Oracle Database 11g that might serve as a good starting point. Or, you can really get into it and start using a Just Enough Operating System (JeOS) version of OEL.
We can setup an instance of OEL, Oracle Database 11g R2, Oracle UCM 10gR3 and get everything configured just right. Then we can spawn as many “instances” of this as we want for training, testing or demonstration purposes. We can choose to accept a single processor box with 2 gigabytes of RAM, or we can throw the “big iron” on and fire up with 8 processors and 16 gigabytes of RAM or pretty much anything in between.
Think about this scenario. You want to try out Digital Asset Management on Content Server. Specifically Video Manager. You can acquire an EC2 instance with all the power you need to run Flip Factory within minutes. Flip Factory is a neat piece of software, but the processing requirements to really use it are pretty steep. Most development groups I know do not have that kind of horsepower just lying around waiting to be used. With EC2, you can have it running by lunch time. We use this kind of quick hardware acquisition to provision testing environments or "component labs" frequently.
Finally, the nice thing about all this is accessibility. We can quickly and easily open this up for a client or prospect to view a test instance. We can even spawn a separate instance of the original for the client to "play around on". But what if they do something we hadn't accounted for? Drop that instance and re-spawn a new one, and they're back in action fifteen to twenty minutes later.
In the future, I am going to try to cover more specific details about the actual setup, the problems we encounter and how to solve them. I will also detail a handy array of tools we use to work with Amazon Web Services (AWS). Some of this will be on display if you see my presentation in Vegas!
Just food for thought. Think about it some. The opportunities are endless.
There has been some good discussion going on over on the Oracle ECM Forums regarding Accounts. It started out as Accounts and Access Control Lists, but it has morphed a little into some discussion about how to use Accounts in general and some possible structures people are using to lay out their account hierarchies.
Check out these discussions about accounts in the Oracle ECM Forums:
One of the easiest ways to see what might be going wrong, or right, in Content Server is the use of trace sections on the System Audit page. Trace sections allow you a great variety of control over what kind of information shows up in the server output.
See this former blog entry for more detailed information about tracing:
Content Server Tracing and Creating Your Own Custom Trace Sections
Refreshing the Server Output to show you what is going on can get annoying though. If you add this entry to General Configuration or config.cfg and restart you can get server output to be written to disk:
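The entry itself is not shown above, so the line below is only a sketch with a hypothetical variable name; check your UCM documentation for the actual setting.

```
# Hypothetical variable name for illustration only; verify against
# your UCM documentation. Goes in General Configuration or config.cfg
# (restart required).
# Write the server output, including active trace sections, to a log
# file on disk instead of only the System Audit page's output view.
IsSaveServerOutput=1
```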
To find the log on the disk look under the directory that is appropriate for your Operating System.
Windows: <install dir>/bin/IdcServerNT.log
Linux: <install dir>/etc/log
Now that you have found the log, it is easier to keep an eye on what's going on by using something like the tail command in Linux/UNIX. The tail command with the -f switch will continuously show data as it is added to the log file. In Linux you simply use this command:
tail -f <install dir>/etc/log
There are several applications available that add similar functionality for windows:
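As a baseline, here is what tail usage looks like (the log path is illustrative): tail -n prints the last few lines once, while tail -f keeps following new output until you stop it. On Windows, PowerShell's Get-Content with the -Wait switch gives similar follow behavior without installing anything.

```shell
# Create a sample file to stand in for <install dir>/etc/log (path is illustrative).
printf 'line 1\nline 2\nline 3\n' > /tmp/idcserver.log

# Print only the last two lines of the log, once:
tail -n 2 /tmp/idcserver.log

# Follow the log continuously as the content server appends to it
# (press Ctrl+C to stop):
# tail -f /tmp/idcserver.log
```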
The other day I was using an Oracle database tablespace supplied by a DBA for a development environment, and when I went to rebuild the full-text index, the Repository Manager greeted me with this nifty little error:
ORA-01658: unable to create initial extent for segment in tablespace
Ok, this is a development box with lots of space. I thought this was all set up to expand on demand, but let's check it out. First, we connect to the database with something like Toad, SQL Developer, or JDeveloper and execute a query similar to this one:
select file_name, bytes, autoextensible, maxbytes from dba_data_files where tablespace_name='DEVWCM_SYSTEM';
Your tablespace name will vary, BUT REMEMBER: the value you supply for the tablespace name is case sensitive.
Running this query, I found out that autoextend is NOT turned on, and further exploration revealed a maxed-out datafile. Fine, let's get autoextend turned ON. Here are two samples:
alter database datafile 'C:\Oracle\app\oradata\orcl\DEVWCM_SYSTEM.DBF'
autoextend on;

alter database datafile 'C:\Oracle\app\oradata\orcl\DEVWCM_SYSTEM.DBF'
autoextend on next 100m maxsize 2000m;
In the first case we're going to simply turn on autoextend and let it ride. In case number two you can see some extra instructions including how much to extend and a limit.
And did this take care of the problem? Yep. Sweet, a fresh, clean, working index.