Tibi Kiss

Ranch Hand
since Jun 11, 2009

Recent posts by Tibi Kiss

Thank you!

For the past 4 weeks I was in a hurry to migrate a classic-webapp-style site to Liferay. The new site is now online.
On the surface it looks very similar to the old one. There are about 14 portlets; some of them are not visible from outside, because they are infrastructure and license management tools. The most exciting thing is that, behind the scenes, it can start a Hadoop cluster on Amazon EC2 which performs bioinformatics tasks using the tools downloadable from the same site.

This was my first Liferay integration project. The real Liferay features will only be added in the following weeks or months: a challenge with a prize will be announced, and the community of enrolled users has to be managed. Some customer support and our small organization also have to be organized, because we have a few people in several parts of the world. In other words, I'm still somewhere at the beginning of real Liferay system development.
10 years ago
In fact, user import is extremely easy.

I also ran into problems with password migration. It looks like my old passwords are salted MD5 hashes. Unfortunately Liferay does not support this kind of digest.
If I could hook (or use the ext environment to replace) com.liferay.portal.security.pwd.PwdEncryptor or com.liferay.portal.security.pwd.PwdAuthenticator, that would be one way to migrate.
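Since the digest type is not supported out of the box, a custom authenticator would have to recompute the legacy hash itself. Below is a minimal sketch in plain Java, assuming the legacy scheme is hex-encoded MD5 over salt-then-password — the ordering and encoding are guesses about the old system, and the real hook would call something like this from a PwdAuthenticator replacement:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public final class LegacyMd5Check {

    // Hex-encode a byte array.
    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Recompute the legacy digest: hex(MD5(salt + password)).
    // The salt-then-password ordering is an assumption about the old scheme.
    public static String digest(String password, String salt) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            md5.update(salt.getBytes(StandardCharsets.UTF_8));
            md5.update(password.getBytes(StandardCharsets.UTF_8));
            return toHex(md5.digest());
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e);
        }
    }

    // Compare the recomputed digest with the stored hex digest.
    public static boolean verify(String password, String salt, String storedHexDigest) {
        return digest(password, salt).equalsIgnoreCase(storedHexDigest);
    }
}
```

On a successful legacy match, the migration hook could then re-encrypt the password with one of Liferay's supported digests, so the MD5 path is only ever hit once per user.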
10 years ago

Amro Ali wrote: Take a look at this blog for details on using expando. I know it's written in Velocity, but Velocity leverages Java objects and methods.

If you require more detailed examples in java just reply again to the post and I'll post some.

Thanks for the link. It will be today's task :-)
10 years ago
Thank you Ali for the detailed description.

Amro Ali wrote:
As for your follow-up post, I suggest you keep the default that enables the creation of the private/public pages. It will be a pain to recreate them for existing users, whereas if you create them but don't add any links in the navigation menu or dock bar, it will be much easier.

I had reasoned it out exactly the opposite way, so thanks for pointing this out.

Here is how the big picture looks now (yesterday I ran all the developer tests on the new Liferay-based site): the menu on the default community is already fully occupied, so I'll put all the administrative (intranet-like) features inside an organization community, simply because there I have space for more menu items, and within that I'll use multiple menu items with different roles.
10 years ago
Unfortunately I am not using LDAP. I have a simple database-table-based authentication, originally implemented with Spring Security.

Let me describe the integration between my old code and Liferay.
Initially I had a database with a User table and some foreign keys referring to that User table.
I didn't merge my old tables into the Liferay database; I kept them as they were. Instead I created the simplest possible integration of the users: in my old code (to be migrated into the portal environment), wherever the authenticated user is used to look up the user table, I introduced a mapping. First I ask Liferay's ThemeDisplay for the logged-in User (if any); then, with that user's id, I check whether I have already cloned the user data into the old users table. If not, I create that record, also reading from Liferay the 3 expando fields needed to satisfy all the field requirements of my old users table.

I'm not convinced about this approach; maybe later I'll introduce a hook by implementing Liferay's user service.
(My current first approach also handles user-detail changes, but it executes a bit too late. It's fine for the logged-in user itself, but if some administrative task uses the old users table, it may theoretically catch it in an inconsistent state, not yet reflecting the latest changes the user made in Liferay. In practice this happens if the user logs out immediately after changing their details.)
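The clone-on-first-use mapping described above can be sketched independently of the Liferay APIs. Here the legacy users table is modeled as a plain Map and the portal side as a small lookup interface; the class names and the three field names (orcid, institute, country) are made up for illustration — the real code would get the id from ThemeDisplay and the values from the user's expando fields:

```java
import java.util.HashMap;
import java.util.Map;

public class LegacyUserBridge {

    // Stand-in for a row in the old users table.
    public static class LegacyUser {
        public final long portalUserId;
        public final String email;
        public final String orcid;      // expando-backed field (hypothetical name)
        public final String institute;  // expando-backed field (hypothetical name)
        public final String country;    // expando-backed field (hypothetical name)

        public LegacyUser(long portalUserId, String email,
                          String orcid, String institute, String country) {
            this.portalUserId = portalUserId;
            this.email = email;
            this.orcid = orcid;
            this.institute = institute;
            this.country = country;
        }
    }

    // Stand-in for the portal side: user id -> email and expando values.
    public interface PortalLookup {
        String email(long userId);
        String expando(long userId, String field);
    }

    private final Map<Long, LegacyUser> legacyTable = new HashMap<>();

    // Look up the legacy record for the logged-in portal user,
    // cloning it from the portal data on first access.
    public LegacyUser lookupOrClone(long portalUserId, PortalLookup portal) {
        return legacyTable.computeIfAbsent(portalUserId, id ->
            new LegacyUser(id,
                portal.email(id),
                portal.expando(id, "orcid"),
                portal.expando(id, "institute"),
                portal.expando(id, "country")));
    }

    public int size() {
        return legacyTable.size();
    }
}
```

The staleness problem described in the post is visible in this shape too: once cloned, the record is never refreshed, so a portal-side change is invisible until some extra synchronization (the user-service hook) pushes it across.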
10 years ago

Amro Ali wrote: please take a look at the getLayout methods in com.liferay.portal.service.LayoutLocalServiceUtil; this will return the layout, and from it you can retrieve the plid

Thank you, Ali.
10 years ago
Today I would like to solve the migration of my old non-Liferay user database into Liferay 6.1.
I have all the required fields, plus I know that I need 3 more expando fields.

I have seen code that can programmatically create users, but I don't know how to add those 3 expando fields at the moment the users are created.
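One plausible pattern — an assumption on my part, not verified against the 6.1 docs — is to create each user first and then set the custom fields through the user's expando bridge, i.e. user.getExpandoBridge().setAttribute(name, value). Here is a runnable sketch of that import loop with the portal calls replaced by stand-ins; the three field names are hypothetical:

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class UserImportSketch {

    // Stand-in for the created portal user and its expando bridge.
    public static class ImportedUser {
        public final String email;
        public final Map<String, Serializable> expando = new HashMap<>();

        public ImportedUser(String email) {
            this.email = email;
        }

        // Mirrors ExpandoBridge.setAttribute(String, Serializable).
        public void setAttribute(String name, Serializable value) {
            expando.put(name, value);
        }
    }

    // One row from the old user database, with the three extra fields.
    public static class LegacyRow {
        public final String email, orcid, institute, country; // names hypothetical

        public LegacyRow(String email, String orcid, String institute, String country) {
            this.email = email;
            this.orcid = orcid;
            this.institute = institute;
            this.country = country;
        }
    }

    // Import loop: create each user, then set the three expando fields.
    public static List<ImportedUser> importAll(List<LegacyRow> rows) {
        List<ImportedUser> created = new ArrayList<>();
        for (LegacyRow row : rows) {
            // Real code: User user = UserLocalServiceUtil.addUser(...);
            ImportedUser user = new ImportedUser(row.email);
            // Real code: user.getExpandoBridge().setAttribute(...);
            user.setAttribute("orcid", row.orcid);
            user.setAttribute("institute", row.institute);
            user.setAttribute("country", row.country);
            created.add(user);
        }
        return created;
    }
}
```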
10 years ago
When you create a portlet URL, you need the target page layout id (the plid). If the URL is created on the same page, you can obtain the plid easily from ThemeDisplay.
The problem appears when you want a cross-page portlet URL link.

A few days ago I created such a portlet, together with a tiny portlet configuration screen where I can manually fill in the plid number, so that this special portlet reads the value from this manual configuration. But I find the plid by logging it from within the portlet I want to link to.
Ideally, the portlet configuration page would search for pages containing the specified portlet name, or simply pop up a list of all page layouts (like the menu structure in a dropdown list), so I could pick the plid of the selected page.

Does somebody have a concrete code snippet that shows how to query the plid?
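The configuration-screen idea boils down to scanning the page layouts and keeping the ones that contain the given portlet id. In Liferay the layouts would come from LayoutLocalServiceUtil and the plid from the Layout object; here the search itself is modeled with plain records so the logic is visible:

```java
import java.util.List;
import java.util.Optional;
import java.util.Set;

public class PlidFinder {

    // Minimal stand-in for a page layout: its plid, name,
    // and the ids of the portlets placed on it.
    public static class Page {
        public final long plid;
        public final String name;
        public final Set<String> portletIds;

        public Page(long plid, String name, Set<String> portletIds) {
            this.plid = plid;
            this.name = name;
            this.portletIds = portletIds;
        }
    }

    // Return the plid of the first page containing the given portlet id.
    public static Optional<Long> findPlid(List<Page> pages, String portletId) {
        return pages.stream()
            .filter(p -> p.portletIds.contains(portletId))
            .map(p -> p.plid)
            .findFirst();
    }
}
```

A dropdown of all pages is then just the same list rendered as (name, plid) pairs, with the search acting as a pre-filter.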
10 years ago
Those of you who have access to larger deployments probably have an idea of which portlet technologies are used the most.

I know that Liferay integrates many portlet technologies, but I would like to read here which ones are the most promising or most used, and which are the worst-case scenarios, so I can make an estimate.

These days I migrated 11 portlets from a classic webapp environment into Liferay portlets. I just kept the existing technologies: in my case I had 3 portlets written in Spring MVC and the rest in Spring Web Flow with the same MVC view technology.
In the future I would eventually consider other portlet technologies.
10 years ago
Just another idea. If I want to go online as soon as possible, which approach is more viable:
- at first just disable private and public pages for every user, then enable them later when necessary
- or enable them now and disable them later. What happens in this case: will those who already have their pages still be able to use them, with only new users or new pages blocked?

Basically I am interested in what happens at these transitions.
10 years ago

For a company portal I would like to set up:
- public users who access the pages of the website; they would not have public pages and may have (not decided yet) private pages, but unmodifiable ones
- company users who have at least one group for the intranet; those users will not have public pages, and their private pages are modifiable
- a category of company users (or groups) who are allowed to have public pages and modifiable private pages
- at least one customer group; they would not have public pages, and their private pages have to be unmodifiable. The customer-related group stands for customer support: there will be a public page where many customers can share ideas and see each other's comments, and a private area where concrete problems and private communication take place.

While looking into portal.properties, I see the following properties

I expect that the described scenario cannot easily be configured with just these properties. Does that mean these properties cannot be controlled on a role basis?
Making the group which may have public pages into power users may solve the public-pages problem.
But whether a private page is editable apparently cannot be differentiated. Or is there a role-based control possibility?
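For reference, the portal-wide switches I have in mind look roughly like this — the property names are recalled from Liferay 6.x defaults and should be verified against your version's portal.properties; note that they are global booleans, not role-scoped:

```properties
# Whether personal pages are created for users at all (global, not per role).
layout.user.private.layouts.enabled=true
layout.user.public.layouts.enabled=true

# Whether users may modify their personal pages (again global).
layout.user.private.layouts.modifiable=true
layout.user.public.layouts.modifiable=true

# Whether personal pages are created eagerly at account creation
# or lazily on first use (available in later 6.x versions).
layout.user.private.layouts.auto.create=true
layout.user.public.layouts.auto.create=true
```

Since these apply to every user, per-group differentiation would have to come from somewhere else, such as the Power User role check or a custom hook.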

What is your recommendation? From the user's point of view, does it make sense to have modifiable private pages on a company website that will collaborate with scientists (in the bioinformatics and genetics fields)?
10 years ago
Thank you, Chuck.

Adjusting the size of the cluster would not be done with the intent of pulling out an instance while it is running tasks. The main intent would be to increase the number of instances while the Hadoop cluster is already running, at some decision point (also in my hands), and to have some kind of graceful shutdown of certain nodes after the huge load drops back to a much lower one. In that case, a mechanism would tell the Hadoop cluster manager not to start new tasks on those nodes and, when the current tasks are done, either notify us or have the instance shut itself down.

Knowing the load distribution of our current jobs, I calculated that a reasonable amount of resources (costs) could be saved.

Because of multi-job sessions, I have a situation where for one client I start several (50) EC2 instances, and then later a very different job arrives which does not require so many instances. When the biggest job finishes, the smaller job scales up so much that the remaining tasks finish proportionately soon — but not fast enough to stay within the accounting-hour limit, so I get another 50 instance-hours on my bill when 10 instances could have finished the job. The price of the 40 extra instance-hours is lost. Frankly, that alone is not enough to make graceful shutdown worth it. At least not for graceful shutdown by itself.
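The billing loss above is simple instance-hour arithmetic: EC2 bills per started instance-hour, so if the follow-up job keeps all 50 instances into one more billed hour when 10 would suffice, (50 - 10) × 1 = 40 instance-hours are wasted. A sketch, with a hypothetical per-instance-hour price:

```java
public class InstanceHourWaste {

    // Instance-hours billed but not needed: a cluster of `running` instances
    // crosses into `extraHours` additional billed hours, although only
    // `needed` instances were required for the remaining work.
    public static int wastedInstanceHours(int running, int needed, int extraHours) {
        return (running - needed) * extraHours;
    }

    // The cost of those wasted instance-hours at a given hourly price.
    public static double wastedCost(int running, int needed, int extraHours,
                                    double pricePerInstanceHour) {
        return wastedInstanceHours(running, needed, extraHours) * pricePerInstanceHour;
    }
}
```

At a hypothetical $0.10 per instance-hour, the 50-vs-10 case loses about $4 per extra billed hour — small per occurrence, which is exactly the argument in the post that graceful shutdown alone doesn't justify the effort.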

The possibility of enlarging the cluster is more pressing: while processing small MapReduce jobs, a huge job can arrive in the pipeline, and I should immediately start a completely new Hadoop cluster with many instances, while the smaller jobs from the original schedule could be finished in the spare time of the bigger cluster (consider the losses as the huge task is ending). Also, managing multiple clusters plus HDFS may complicate the problem too much.

At the moment I feel that dynamically starting up new nodes would be the significant feature. Graceful shutdown is not so pressing, because of the scale-out behavior of MapReduce algorithms.
12 years ago
Some developmental psychology researchers, even 30 years ago, mentioned that a kind of map-and-reduce decomposition (very similar to MapReduce) takes place naturally while the human mind is thinking (of course this is only a short summary). Therefore children who learn to think in a more map-reduce way of solving problems become much more creative.
If this is so, then my conclusion is that we should expect a huge move forward in the success of MapReduce itself... because it's a natural approach to many problems.
12 years ago