The Most Important Component of Evolution is Death

CS140, 01/20/2012
From a lecture by Professor John Ousterhout.

Today’s thought for the weekend is: the most important component of evolution is death.  So I want to address that first at a biological level and then let’s pop up and talk about it at a societal level, Silicon Valley, and computer software.  So, first, from an underlying biological standpoint, it’s sort of fundamental that for some reason it’s really hard for an existing organism to change in fundamental ways.  How many of you have been able to grow a third leg?  Most people can’t even change their mind let alone change something fundamental about themselves.

People try.  You make your hair look a different color, but it’s really the same color underneath.  In fact you have this whole thing called your immune system whose goal is basically to prevent change.  You’ve got these white blood cells running around looking for anything that looks different or the slightest bit unfamiliar, and as soon as they find it they kill it.  So it’s very hard for any organism to change itself, but when we make new organisms it’s actually fairly easy to make them different from the old ones.  So for example gene mutations seem to happen pretty commonly.  They can be good or bad, but they do change the system.  Or, with sexual reproduction, it’s even easier because you take genes from two parents and you mix and match them, and who knows what you’re going to end up with as a result.

So the bottom line is it’s a lot easier to build a new organism than it is to change an existing one.  And, in order for that to work, you have to get rid of all the existing ones.  So death is really fundamental.  If it wasn’t for death there’d be no way to bring in new organisms and create change.

I would argue this same concept applies at a societal level.  In fact, if you look at social structures, any structure that’s been around a really long time, it’s almost certainly out of date.  Because, they just can’t change.  Human organizations, companies, political systems, religions, they all have tremendous difficulty changing.

So, let me talk about companies in particular.  We’re hearing these days about various companies in trouble.  Is Yahoo going to make it?  And Kodak filing for Chapter 11.  People seem to think: those guys must have been bozos.  They blew it.  How could you fumble the internet when you’re Yahoo?

My opinion is this is just the circle of life.  That’s all.  That fundamentally companies can’t change.  You come in with a particular technology, something you do very well, but if the underlying technology changes, companies can’t adapt.  So they actually need to die.

I view this as a good thing.  This is the way we clear out the old and make room for the new.  And in Silicon Valley everyone kind of accepts that.  The death of a company is not a big deal.  In fact, what’s interesting is that the death of the company isn’t necessarily bad for the people at all.  They just go on to the next company.

And I was talking to a venture capitalist once and she told me she actually liked funding entrepreneurs who had been in failed startups because they were really hungry yet still had experience. People in successful startups weren’t as hungry and didn’t succeed as often when they got funded.  So death is actually a good thing in Silicon Valley.

Now let’s talk about computer software.  This is kind of ironic because software is a very malleable medium, very easy to change.  Much easier to change than most things.  And I actually consider that to be a problem because, in fact, people don’t have to throw it away and start again.

Software lives on and on and on.  You know we’re still working on versions of Windows 20 years old right now.  And as a result the software gets messier and kludgier and harder and harder to change.  And yet people keep struggling.  They won’t throw it away and start again.

And so what’s funny is that this incredibly malleable medium gets to a point where we can’t make fundamental changes in it because people aren’t willing to throw it away and start again.  I sometimes think the world would be a better place if somehow there could be a time limit on software where it expired.  You had to throw it away and start again.

So this is actually one of the things I like about California and Silicon Valley.  It’s that we have a culture where people like change and aren’t afraid of it.  And we’re not afraid of the death of an idea or a company, because it means that something new and even better is coming along next.

So that’s my thought for the weekend…

A little bit of slope makes up for a lot of y-intercept

CS140, 01/13/2012
From a lecture by Professor John Ousterhout.

Here’s today’s thought for the weekend: a little bit of slope makes up for a lot of Y-intercept.

So at a mathematical level this is an obvious truism. If you have two lines, the red line and the blue line, and the red line has a lower Y-intercept but a greater slope, then eventually the red line will cross the blue line.

And if the Y-axis is something good (depending on your definition of something good), then I think most people would pick the red trajectory over the blue trajectory (unless you think you’re going to die before you get to the crossing point).

 

So in a mathematical sense it’s kind of obvious. But I didn’t really mean it in a mathematical sense; I think this is a pretty good guideline for life also. What I mean is that how fast you learn is a lot more important than how much you know to begin with. So in general I’d say people emphasize too much how much they know and not how fast they’re learning.

That’s good news for all of you because you’re at Stanford, and that means you learn really, really fast. This is a great advantage for you. Now let me give you some examples. The first example is: you shouldn’t be afraid to try new things even if you’re completely clueless about the area you’re going into. No need to be afraid of that. As long as you learn fast you’ll catch up and you’ll be fine.

For example I often hear conversations the first week of class where somebody will be bemoaning, “Oh so-and-so knows blah-blah-blah, how am I ever going to catch up to them?” Well, if you’re one of the people who knows blah-blah-blah it’s bad news for you because honestly everyone is going to catch up really quickly. Before you know it that advantage is going to be gone and if you aren’t learning too you’re going to be behind.

Another example is that a lot of people get stuck in ruts in their lives. They realize they’re in the wrong situation: “I have the wrong job or the wrong spouse or whatever…”

And they’re afraid to go off and try something new. Often they’re worried, “I’m going to really look bad if I go.”

I’m kidding about the spouse. But, seriously, people will be afraid to try some new thing because they’re worried they’ll look bad or will make a lot of rookie mistakes. But, I say, just go do it and focus on learning.

Let me take the spouse out of the equation for now.

Focus on the job.

Another example is hiring. Before I came back to academia a couple of years ago I was out doing startups. What I noticed is that when people hire, they almost always hire based on experience. They go through somebody’s resume trying to find the person who has already done the job they want them to do three times over. That’s basically hiring based on Y-intercept.

Personally I don’t think that’s a very good way to hire. The people who are doing the same thing over and over again often get burnt out and typically the reason they’re doing the same thing over and over again is they’ve maxed out. They can’t do anything more than that. And, in fact, typically what happens when you level off is you level off slightly above your level of competence. So in fact you’re not actually doing the current job all that well.

So I would always hire based on aptitude, not on experience. You know, is this person ready to do the job? They may never have done it before and have no experience in this area, but are they a smart person who can figure things out? Are they a quick learner? And I’ve found that’s a much better way to get really effective people.

So I think this is a really interesting concept you can apply in a lot of different ways. And the key thing here, I think, is that slow and steady is great. You don’t have to do anything heroic. The difference in slopes doesn’t have to be that great: if you just think every day about learning a little bit more and getting a little bit better, lots of small steps, it’s amazing how quickly you can catch up and become a real expert in the field.

I often ask myself: have I learned one new thing today? Now you guys are younger and, you know, your slope is a little bit higher than mine and so you can learn 2 or 3 or 4 new things a day. But if you just think about your slope and don’t worry about where you start out you’ll end up some place nice.

Ok, that’s my weekend thought.

Change Context Root for xmlpserver on 11g BI Publisher

I have been researching how to change the context root for xmlpserver on 11g BI Publisher. Since our application could not use xmlpserver as the context root, I wanted to find a way to deploy the same application under a new context root. This topic is still an open question. I have worked out one solution, explained in this article, but I have not yet figured out the best approach. Let me know if you have any ideas about the questions I ask in the article.

My first attempt was to change the context root on the fly by modifying the xmlpserver deployment configuration, as below:

Log in to the application console and click on Deployments. You should see a list of deployments on the WebLogic server. Find the one named bipublisher and expand it; there is a web application named xmlpserver. Click on it to navigate to the settings page of xmlpserver, shown in the picture below (click on the Configuration tab and scroll to the bottom):

 

This Context Root field is editable and was previously set to xmlpserver. So I changed it to newContextRoot and saved the configuration. However, the setting did not get picked up after a server restart, and it was not working.

I expected to be able to access http://domain-name:9704/newContextRoot instead of http://domain-name:9704/xmlpserver.

Does anyone know why this straightforward approach does not work as expected?

My working approach:

Here are the rest of the steps. First I undeploy the current bipublisher application in Enterprise Manager, then duplicate the existing xmlpserver.ear under the new name btmreport.ear. Next I modify a few files in btmreport.ear and deploy it under the same name, bipublisher, to the same target. (This involves a few tricks I found through trial and error; I explain them in the corresponding steps.)

Step 1. Undeploy the current bipublisher application.

Access Enterprise Manager as below. By default the link is: http://Domain-Name:7001/em

Now on the navigation panel find bipublisher and click on Undeploy as below.

Continue the undeployment.

If you run into the same error I did (below), it means the configuration of bifoundation has been locked, and you need to go to the application console to release it.

How do you release the configuration? Simple: go to the application console. By default the link is: http://Domain-Name:7001/console

Once logged in, you should see that there are pending changes that must be resolved in order to unlock the configuration of the bifoundation domain. I had been trying out other deployments in the console, so I simply undid those changes to release the configuration.

After this change, I could go back to Enterprise Manager and proceed with undeploying the application.

Now we need to duplicate xmlpserver.ear under the name btmreport.ear and modify two files within the ear.

xmlpserver.ear location: <BI_middleware_home>/Oracle_BI1/bifoundation/jee/xmlpserver.ear

Duplicate the ear in the same location: <BI_middleware_home>/Oracle_BI1/bifoundation/jee/btmreport.ear

There are two files to be modified in btmreport.ear:

   (MODIFY 1) btmreport.ear\META-INF\application.xml

Change from: <display-name>xmlpserver</display-name>
Change to:   <display-name>btmreport</display-name>

Change from: <context-root>xmlpserver</context-root>
Change to:   <context-root>btmreport</context-root>

   (MODIFY 2) btmreport.ear\xmlpserver.war\WEB-INF\weblogic.xml

Change from: <cookie-path>/xmlpserver</cookie-path>
Change to:   <cookie-path>/btmreport</cookie-path>

Change from: <context-root>xmlpserver</context-root>
Change to:   <context-root>btmreport</context-root>

The reason these two files need to change is covered in the Oracle documentation: Read More on Web Applications from Oracle Doc
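The duplicate-and-edit step can also be scripted. Below is a minimal Python sketch (my own helper, not part of any Oracle tooling; the function name is hypothetical) that copies the ear while rewriting the display-name and context-root values inside META-INF/application.xml. Note that weblogic.xml lives inside the nested xmlpserver.war, which is itself a zip, so it would need the same treatment one level down.

```python
# Sketch: duplicate xmlpserver.ear as btmreport.ear, rewriting the
# context root in META-INF/application.xml along the way.
# Hypothetical helper -- not part of any Oracle tooling.
import re
import zipfile

def copy_ear_with_new_context_root(src_ear, dst_ear, old_root, new_root):
    """Copy src_ear to dst_ear, replacing <display-name> and
    <context-root> values equal to old_root inside application.xml."""
    with zipfile.ZipFile(src_ear) as src, \
         zipfile.ZipFile(dst_ear, "w", zipfile.ZIP_DEFLATED) as dst:
        for entry in src.infolist():
            data = src.read(entry.filename)
            if entry.filename.endswith("META-INF/application.xml"):
                text = data.decode("utf-8")
                # Rewrite only the tag bodies, leaving everything else intact.
                for tag in ("display-name", "context-root"):
                    text = re.sub(
                        r"<%s>\s*%s\s*</%s>" % (tag, re.escape(old_root), tag),
                        "<%s>%s</%s>" % (tag, new_root, tag),
                        text,
                    )
                data = text.encode("utf-8")
            dst.writestr(entry, data)
```

A second pass over xmlpserver.war (extract, edit weblogic.xml, re-zip) would complete the MODIFY 2 step the same way.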

Step 2. Deploy the application under the same name bipublisher. Go to Enterprise Manager and deploy the application as below:

Input the location of the btmreport.ear. (This could be found in the previous step)

Deploy the application under the same target bi_cluster -> bi_server1

 Now we come to the deployment page.

The trick here is to set the name to bipublisher. That is why we had to undeploy the bipublisher application first: WebLogic will not allow the same application name to be created twice. The consequence of not using bipublisher as the application deployment name is that you will not get a complete xmlpserver login page (it shows as a blank blue page). I assume that somewhere in the BIP software the name bipublisher is hardcoded.

Wait until the deployment finishes successfully, and then validate the new context root (in our case, it should be btmreport).

To validate the link, we can go back to the application console and take a look at the new deployment details:

Go to the configuration of the deployment btmreport and go to Testing to view all the links:

(By default the xmlpserver port should be 9704. In my environment, I set it to 9500.)

Does anyone know how to keep xmlpserver undeployed but deploy the new btmreport application under bipublisher?

Export a table to csv file and import csv file into database as a table

Today I will talk about exporting a table to a csv file. This method is especially useful for advanced data manipulation within the csv file, and I will also talk about how to import the csv data back into the database as a table. I will show a Java code snippet to explain importing the csv data, and discuss the tradeoffs of using SQL Developer. My environment is based on Oracle Database, so all the utilities I use here target Oracle.

Export a table to csv file:

Method 1. Use Oracle SQL Developer. Please take a look at the video below:

Note: This is fine for a small table without much data. However, if you are dealing with a large table, I recommend you try method 2.

Method 2. Generate the csv file from the command line. Here is a sample sql file I used to generate the csv file.

set pause off
set echo off
set verify off
set heading off
set linesize 5000
set feedback off
set termout off
set term off
spool FY11_276.csv
set colsep ','
set pagesize 0
select 'BOOK_MONTH_ID','BOOK_MONTH_CODE','BOOK_MONTH_ORDER','QUARTER_NUMBER','BOOK_MONTH_NUMBER','BOOK_MONTH_NAME','QUARTER_NAME', 'DEL_COL' from dual;
select BOOK_MONTH_ID,BOOK_MONTH_CODE,BOOK_MONTH_ORDER,QUARTER_NUMBER,BOOK_MONTH_NUMBER,BOOK_MONTH_NAME,QUARTER_NAME, null from claire_sample.book_month;

spool off 

Note: the highlighted select statement has a last column called DEL_COL. This column is used to fill the space after the final column separator (comma); otherwise the csv file generated from the command line would have a huge last column filled with spaces. You may need to delete this column manually if you later use Oracle SQL Developer to import the csv file back into the database.
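Deleting DEL_COL by hand gets tedious for large files, so here is a small Python sketch (my own helper, not part of the post's tooling; file names are just examples) that drops the trailing padding column and trims the whitespace SQL*Plus pads around values:

```python
# Sketch: drop the trailing DEL_COL padding column produced by the
# spool script above, and strip the whitespace SQL*Plus pads around
# each value.  Hypothetical helper -- file names are examples only.
import csv

def strip_padding_column(src_path, dst_path):
    """Rewrite src_path to dst_path, removing the last column of every
    row and stripping surrounding whitespace from each field."""
    with open(src_path, newline="") as src, \
         open(dst_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            if row:                       # skip completely empty lines
                writer.writerow(cell.strip() for cell in row[:-1])
```

After this cleanup, the file imports cleanly in SQL Developer without the giant space-filled last column.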

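On the import side, while the promised Java snippet is still pending, the same idea can be sketched in Python with the DB-API batch insert `executemany`. The example below uses sqlite3 only so that it is self-contained; an Oracle driver such as cx_Oracle exposes the same `executemany` interface (with `:1`-style bind variables instead of `?`). Table and file names here are hypothetical.

```python
# Sketch: bulk-load a csv file into a table with DB-API executemany.
# Shown with sqlite3 for self-containment; an Oracle driver such as
# cx_Oracle works the same way (bind style :1, :2, ... instead of ?).
import csv
import sqlite3

def import_csv(conn, csv_path, table):
    """Insert every data row of csv_path into table; the first csv row
    is taken as the list of column names."""
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)             # first row holds column names
        cols = ",".join(header)
        marks = ",".join("?" for _ in header)
        conn.executemany(
            "INSERT INTO %s (%s) VALUES (%s)" % (table, cols, marks),
            (row for row in reader if row),
        )
    conn.commit()
```

Batching through `executemany` avoids a round trip per row, which matters for the large tables that motivated method 2 above.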
To be continued…

How to back up a table and import data from dump file

There are two approaches to back up a table:

1: Back up a table under the existing schema by creating a new table with the tablename_bck naming convention.

   Simply run the following query under the current schema:

   Create table <tablename>_bck as select * from <tablename>;

2: Export the table to a dump file.

    Open a command line tool and use the Oracle utility expdp:

expdp sample/samplepwd@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log

3: Import the dump file into the database.

impdp sample/samplepwd@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log

More thoughts:

    I would also like to note the older utilities exp and imp.

    The syntax is slightly different. Instead of creating a reusable directory object, with exp and imp you have to specify explicitly the directory where your dump file is saved.

exp sample/samplepwd@db10g tables=EMP,DEPT file="C:\EMP_DEPT.dmp" log="C:\expEMP_DEPT.log"
imp sample/samplepwd@db10g tables=EMP,DEPT file="C:\EMP_DEPT.dmp" log="C:\impEMP_DEPT.log"

  Error: ORA-39143: dump file “M:\SAMPLE.dmp” may be an original export dump file

    

     The above problem happens whenever you try to use the Data Pump import client (impdp) to import a dump file that was created with the original export client (exp). For such files, use the matching original imp client instead.
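One way to catch this before impdp complains: classic exp dump files in my experience carry a version banner such as `EXPORT:V10.02.01` near the start of the file (this is an observed convention, not a documented format guarantee), so a quick sniff can tell the two apart:

```python
# Sketch: sniff a dump file before handing it to impdp.  Classic exp
# dumps appear to carry an "EXPORT:V<version>" banner near the start;
# this heuristic is an observation, not a documented format guarantee.
def looks_like_classic_exp(path):
    """Return True if the file looks like an original exp dump."""
    with open(path, "rb") as f:
        head = f.read(128)                # banner sits in the first bytes
    return b"EXPORT:V" in head
```

If this returns True, reach for imp rather than impdp.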

Side Topic on Launching Maestro workflow from a link

By default, launching a Maestro workflow requires users to go to the task console, select the workflow, and then click on the Launch button.

It looks like below. (The link to access the task console is: http://siteurl/maestro/taskconsole)

Usually, once you install the Maestro workflow module, it will dynamically create a task console link on the navigation bar.

For example, if I want to launch my membership application workflow, I need to go to the task console link and initiate the workflow from there.

1-1

Once you refresh the page, the workflow you selected appears in the task console table with a status of Active, meaning the workflow has started.

1-2

Now the question is: how do we initiate a workflow from a direct link on the navigation bar?

In this case, looking at picture 1-1, the link “Apply for the membership” on the navigation menu bar should initiate the membership application workflow for us.

To achieve this goal, I created a custom module, “launchmaestrolinks”. The module is available below:

Note: This module is put together from the work of a few gurus, and the initial idea was learned from the Drupal forum.

Download the module from here…

After the module is downloaded, here are a few things you need to do to install it!

  1. Copy the zip file to [site_corecode]/sites/all/modules
  2. Unzip the file to install the module.
  3. Enable the module “Launch Maestro Links” 

          Log in to your site with administrator privileges, then go to Administration->Modules. You should see a screen similar to the one below:

         

           Make sure you click the checkbox to enable the module and then save the configuration.

        4.  Once you have enabled the module, go to the navigation bar to create and configure the Maestro workflow links.

             Before you create the navigation link for the Maestro workflow, make sure you know the template number of the Maestro workflow you are referring to.

             For example, you can locate the template id number by going to Administration->Structure->Maestro Workflows. In this case, I am going to create the direct link on the navigation bar for my Membership Application Workflow. The template id number is 1.

           

            Now we go to Administration-> Structure->Menus

            Click on the operation “list links” beside Navigation.

            We can simply add a link by clicking on “Add link” at the top of the list of links.

            Note: the link path should follow the pattern mastrolinks/<template_id>. So in my example, it would be mastrolinks/1.

           

               Now you can see the link “Apply for the Membership” on the navigation bar, and once you click that link, it will automatically kick off the Membership Application Workflow.

               End of the instructions and have fun with the new module.

               Merry Christmas to everyone!!

Back up existing data and load a data dump into the Database

Export current data dump as a backup.

   Method 1: Use utility exp

  • Login to the Database Server.
  • Start a windows command prompt, click on Start > Run
  • Type in cmd in the dialog box and click on OK.
  • Change directory to the root C:\ >

Type in the following command:

exp <user_toexport>/<user_toexport_password> file=<directory>\dumpFileName.dmp log=<directory>\logFileName.log owner=<user_toexport> buffer=10000

Press [Enter]

The backup dump file will be found in the directory you specified.

For example, the following command exports the sample schema from the SAMPLE database:

exp sample/sample file=C:\sample.dmp log=C:\sample.log owner=sample buffer=100000

   Method 2: using Data Pump expdp

  • Login to the Database Server.
  • Start a windows command prompt, click on Start > Run
  • Type in cmd in the dialog box and click on OK.
  • Type in the following command to connect to the SAMPLE database

SQLPLUS system/<system password>

Press [Enter]


Execute the following commands to create a database directory. This directory must point to a valid directory on the same server as the database:

SQL> CREATE or REPLACE DIRECTORY <directory_name> AS '<directory\folder>\';

Directory created.

SQL> GRANT READ, WRITE on directory <directory_name> to <user_toexport>;

e.g.  CREATE or REPLACE DIRECTORY imp_dir as 'D:\db_dump';

GRANT READ, WRITE on directory imp_dir to bisbtm;

  • Create the folder on disk that the directory object points to.

  • Type in the following command:

expdp <user_toexport>/<user_toexport_password> directory=<directory_name> dumpfile=dumpFileName.dmp

e.g. expdp sample/sample directory=imp_dir dumpfile=samp.dmp

Press [Enter]

The backup dump file will be found in the directory you specified.
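The two export variants above differ only in how the output location is given: classic exp takes file paths directly, while expdp goes through a database directory object. A small Python sketch (my own wrapper idea, not Oracle tooling; function names are hypothetical) makes the difference explicit by assembling the argument lists. Actually running them still requires an Oracle client installation.

```python
# Sketch: assemble the exp / expdp command lines shown above as
# argument lists suitable for subprocess.run.  Pure string assembly --
# executing the result requires an Oracle client installation.
def build_exp_cmd(user, password, dump_path, log_path, owner):
    # Classic exp: file and log paths are given explicitly.
    return ["exp", "%s/%s" % (user, password),
            "file=%s" % dump_path, "log=%s" % log_path,
            "owner=%s" % owner]

def build_expdp_cmd(user, password, directory, dumpfile):
    # Data Pump expdp: paths go through a database directory object
    # created with CREATE DIRECTORY, as shown above.
    return ["expdp", "%s/%s" % (user, password),
            "directory=%s" % directory, "dumpfile=%s" % dumpfile]
```

Either list can then be passed to `subprocess.run` on the database server, which keeps the backup steps scriptable and repeatable.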

To be continued….