A little bit of slope makes up for a lot of y-intercept

CS140, 01/13/2012
From a lecture by Professor John Ousterhout.

Here’s today’s thought for the weekend: a little bit of slope makes up for a lot of y-intercept.

So at a mathematical level this is an obvious truism. If you have two lines, a red line and a blue line, and the red line has a lower y-intercept but a greater slope, then eventually the red line will cross the blue line.

And if the y-axis is something good, depending on your definition of something good, then I think most people would pick the red trajectory over the blue trajectory (unless you think you’re going to die before you get to the crossing point).


So in a mathematical sense it’s kind of obvious. But I didn’t really mean it in a mathematical sense; I think this is a pretty good guideline for life also. What I mean is that how fast you learn is a lot more important than how much you know to begin with. In general, I’d say people emphasize too much how much they know and not how fast they’re learning.

That’s good news for all of you because you’re at Stanford, and that means you learn really, really fast. This is a great advantage for you. Now let me give you some examples. The first example is: you shouldn’t be afraid to try new things even if you’re completely clueless about the area you’re going into. No need to be afraid of that. As long as you learn fast you’ll catch up and you’ll be fine.

For example I often hear conversations the first week of class where somebody will be bemoaning, “Oh so-and-so knows blah-blah-blah, how am I ever going to catch up to them?” Well, if you’re one of the people who knows blah-blah-blah it’s bad news for you because honestly everyone is going to catch up really quickly. Before you know it that advantage is going to be gone and if you aren’t learning too you’re going to be behind.

Another example is that a lot of people get stuck in ruts in their lives. They realize they’re in the wrong job for them. I have the wrong job or the wrong spouse or whatever…

And they’re afraid to go off and try something new. Often they’re worried, I’m going to really look bad if I go..

I’m kidding about the spouse. But, seriously people will be afraid to try some new thing because they’re worried they’ll look bad or will make a lot of rookie mistakes. But, I say, just go do it and focus on learning.

Let me take the spouse out of the equation for now.

Focus on the job.

Another example is hiring. Before I came back to academia a couple of years ago I was out doing startups. What I noticed is that when people hire, they almost always hire based on experience. They pore over somebody’s resume trying to find the person who has already done the job they want done three times over. That’s basically hiring based on y-intercept.

Personally I don’t think that’s a very good way to hire. The people who are doing the same thing over and over again often get burnt out and typically the reason they’re doing the same thing over and over again is they’ve maxed out. They can’t do anything more than that. And, in fact, typically what happens when you level off is you level off slightly above your level of competence. So in fact you’re not actually doing the current job all that well.

So what I would always hire on is based on aptitude, not on experience. You know, is this person ready to do the job? They may never have done it before and have no experience in this area, but are they a smart person who can figure things out? Are they a quick learner? And I’ve found that’s a much better way to get really effective people.

So I think this is a really interesting concept you can apply in a lot of different ways. And the key thing here, I think, is that slow and steady is great. You don’t have to do anything heroic. The difference in slopes doesn’t have to be that great; if you just think every day about learning a little bit more and getting a little bit better, lots of small steps, it’s amazing how quickly you can catch up and become a real expert in the field.

I often ask myself: have I learned one new thing today? Now you guys are younger and, you know, your slope is a little bit higher than mine and so you can learn 2 or 3 or 4 new things a day. But if you just think about your slope and don’t worry about where you start out you’ll end up some place nice.

Ok, that’s my weekend thought.

Change Context Root for xmlpserver on 11g BI Publisher

I have been researching how to change the context root for xmlpserver on 11g BI Publisher. Since our application could not use xmlpserver as the context root, I wanted to come up with a way to deploy the same application under a new context root. This topic is still an open question: I have worked out one solution, explained in this article, but I have not yet figured out the best approach. Let me know if you have any ideas about the questions I raise below.

My first attempt was to change the context root on the fly by modifying the xmlpserver deployment configuration, as below:

Log in to the Administration Console and click on Deployments. You should see a list of deployments on the WebLogic server. Find the one named bipublisher and expand it; there is a web application named xmlpserver. Click on it to navigate to the xmlpserver settings page shown in the picture below (click on the Configuration tab and scroll to the bottom):


This Context Root field is editable and was previously set to xmlpserver, so I changed it to newContextRoot and saved the configuration. However, the setting didn’t get picked up after a server restart, and it did not work.

I would expect to access http://domain-name:9704/newContextRoot  instead of http://domain-name:9704/xmlpserver

Does anyone know why this straightforward approach does not function as expected?

My working approach:

Here is what I did for the rest of the steps. First, undeploy the current bipublisher application in Enterprise Manager, then duplicate the existing xmlpserver.ear under the new name btmreport.ear. Next, modify a few files in btmreport.ear and deploy it under the same name, bipublisher, on the same target. (This involves a few tricks I found through trial and error; I explain them in the corresponding steps.)

 Step 1. Undeploy the current bipublisher application.

Access Enterprise Manager as below. By default the link is: http://Domain-Name:7001/em

Now on the navigation panel find bipublisher and click on Undeploy as below.

Continue the undeployment.

If you run into the same error I did (below), it means the bifoundation domain configuration has been locked, and you need to go to the Administration Console to release it.

How do you release the configuration? Simple: go to the Administration Console. By default the link is: http://Domain-Name:7001/console

Once logged in, you should see that there are pending changes that need action before the configuration lock on the bifoundation domain is released. I had been trying out other deployments in the console, so I simply undid those changes to release the configuration.

After this change, I went back to Enterprise Manager and proceeded with undeploying the application.

Now we need to duplicate xmlpserver.ear under the name btmreport.ear and modify two files inside the ear.

xmlpserver.ear location: <BI_middleware_home>/Oracle_BI1/bifoundation/jee/xmlpserver.ear

Now duplicate another ear under the same location: <BI_middleware_home>/Oracle_BI1/bifoundation/jee/btmreport.ear

There are two files to be modified in btmreport.ear:

   (MODIFY 1) btmreport.ear\META-INF\application.xml

   Change from: <display-name>xmlpserver</display-name>
   Change to:   <display-name>btmreport</display-name>

   Change from: <context-root>xmlpserver</context-root>
   Change to:   <context-root>btmreport</context-root>

   (MODIFY 2) btmreport.ear\xmlpserver.war\WEB-INF\weblogic.xml

   Change from: <cookie-path>/xmlpserver</cookie-path>
   Change to:   <cookie-path>/btmreport</cookie-path>

   Change from: <context-root>xmlpserver</context-root>
   Change to:   <context-root>btmreport</context-root>

The reasoning behind these two changes is covered in the Oracle documentation: Read More on Web Applications from Oracle Doc
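The two file edits can also be scripted instead of done by hand. Here is a rough sketch (the file and entry names are assumptions matching the listing above, not verified paths) using Python's standard zipfile module to rewrite a single entry inside an archive:

```python
import zipfile

def rewrite_entry(src_ear, dst_ear, entry, old, new):
    # Zip entries cannot be edited in place, so rebuild the archive,
    # substituting old -> new inside the one entry we care about.
    with zipfile.ZipFile(src_ear) as zin, \
         zipfile.ZipFile(dst_ear, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            if item.filename == entry:
                data = data.replace(old.encode(), new.encode())
            zout.writestr(item, data)

# Illustrative call (paths are placeholders):
# rewrite_entry("xmlpserver.ear", "btmreport.ear",
#               "META-INF/application.xml", "xmlpserver", "btmreport")
```

Note that the weblogic.xml change sits inside the nested xmlpserver.war, so that inner archive would need the same treatment one level deeper.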

Step 2. Deploy the application under the same name, bipublisher. Go to Enterprise Manager and deploy the application as below:

Input the location of the btmreport.ear. (This could be found in the previous step)

Deploy the application under the same target bi_cluster -> bi_server1

 Now we come to the deployment page.

The trick here is to set the name to bipublisher. That is why we had to undeploy the bipublisher application first: WebLogic will not allow the same application name to be used twice. The consequence of not using bipublisher as the deployment name is that you will not get a complete xmlpserver login page (it shows up as a blank blue page). I assume the name bipublisher is hardcoded somewhere in the BI Publisher software.

Wait until the deployment finishes successfully, then validate the new context root (in our case, btmreport).

To validate the link, go back to the Administration Console and take a look at the new deployment details:

Go under the configuration of the deployment btmreport and go to testing to view all the links:

(By default the xmlpserver port is 9704. In my environment, I set it to 9500.)

Anyone knows how to keep xmlpserver undeployed but deploy a new btmreport application under bipublisher?

Export a table to csv file and import csv file into database as a table

Today I will talk about exporting a table to a CSV file. This method is especially useful when you need to do advanced data manipulation inside the CSV file, and I will also cover how to import the CSV data back into the database as a table, along with the tradeoffs of using SQL Developer. My environment is Oracle Database, so all the utilities I use here target Oracle.

Export a table to csv file:

Method 1. Use oracle sql developer. Please take a look at the video below:

Note: this is fine for a small table without much data. If you are dealing with a large table, I recommend you try method 2.

Method 2. Generate the CSV file from the command line. Here is a sample SQL file I used to generate the CSV file.

set pause off
set echo off
set verify off
set heading off
set linesize 5000
set feedback off
set termout off
set term off
set colsep ','
set pagesize 0

spool FY11_276.csv

-- (the SELECT statement goes here; its last column is DEL_COL, explained in the note below)

spool off

Note: the select statement has a last column called DEL_COL. This column absorbs the padding that follows the column-separator comma; without it, the last real column of the generated CSV would be padded with a huge run of spaces. You may need to delete this column manually if you later use Oracle SQL Developer to import the CSV back into the database.
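If you would rather clean the spool output programmatically than hand-edit it, here is a hedged sketch in Python that trims the padding and drops the trailing DEL_COL column before loading the rows. The table and file names are made up, and sqlite3 stands in for Oracle here:

```python
import csv
import sqlite3

def load_spooled_csv(path, conn, table, ncols):
    # SQL*Plus pads each column out to linesize, so strip the padding
    # and keep only the first ncols columns (dropping DEL_COL).
    with open(path, newline="") as f:
        rows = [[cell.strip() for cell in row[:ncols]]
                for row in csv.reader(f) if row]
    placeholders = ",".join("?" * ncols)
    conn.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    conn.commit()
    return len(rows)
```

With an Oracle driver such as cx_Oracle, the executemany call has the same shape, bind-variable syntax aside.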

To be continued…

How to back up a table and import data from dump file

There are two approaches to back up a table (plus a third step for restoring from the dump file):

1: Back up a table under the existing schema by creating a new table with the tablename_bck naming convention.

   Simply run the following query under the current schema :

   Create table tablename_bck as select * from <tablename>;
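The same CREATE TABLE ... AS SELECT pattern works in most SQL engines. A quick sketch using Python's built-in sqlite3, with illustrative table names:

```python
import sqlite3

def backup_table(conn, tablename):
    # CREATE TABLE ... AS SELECT copies both the structure and the rows
    # into a new table following the tablename_bck naming convention.
    bck = f"{tablename}_bck"
    conn.execute(f"DROP TABLE IF EXISTS {bck}")
    conn.execute(f"CREATE TABLE {bck} AS SELECT * FROM {tablename}")
    conn.commit()
    return bck
```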

2: Export the table to a dump file.

    Open a command-line tool and use the Oracle Data Pump utility expdp:

expdp sample/samplepwd@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log

3: Import the dump file into database.

impdp sample/samplepwd@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log

More thoughts:

    A note about the older utilities exp and imp:

    The syntax is slightly different. Instead of creating a reusable directory object, with exp and imp you have to specify explicitly the path where your dump file is saved.

exp sample/samplepwd@db10g tables=EMP,DEPT file="C:\EMP_DEPT.dmp" log="C:\expEMP_DEPT.log"
imp sample/samplepwd@db10g tables=EMP,DEPT file="C:\EMP_DEPT.dmp" log="C:\impEMP_DEPT.log"

  Error: ORA-39143: dump file "M:\SAMPLE.dmp" may be an original export dump file


     The above error happens whenever you try to use the Data Pump import client (impdp) to import a dump file that was created with the original export client (exp). Dump files written by exp must be imported with the original imp client instead.

Side Topic on Launching Maestro workflow from a link

By default, launching a Maestro workflow asks users to go to the task console, select the workflow, and then click the Launch button.

It looks like below (the task console link is: http://siteurl/maestro/taskconsole):

Usually, once you install the Maestro workflow module, it will dynamically create a task console link on the navigation bar.

For example, if I want to launch my membership application workflow, I need to go to the task console link and initiate the workflow there.


Once you refresh the page, the workflow you have selected appears in the task console table with a status of Active, meaning the workflow has started.


Now the question is: how do we initiate a workflow from a direct link on the navigation bar?

In this case, look at picture 1-1: on the navigation menu bar, the link “Apply for the membership” should initiate the membership application workflow for us.

To achieve this goal, I created a custom module, “launchmaestrolinks”. The module is described below:

Note: this module is put together from the work of a few gurus, and the initial idea came from the Drupal forum.

Download the module from here…

After the module is downloaded, here are a few things you need to do to install it!

  1. Copy the zip file to [site_corecode]/sites/all/modules
  2. Unzip the file to install the module.
  3. Enable the module “Launch Maestro Links” 

          Log in to your site with administrator privileges, then go to Administration->Modules. You should see a screen similar to the one below:


           Make sure you click the checkbox to enable the module and then save the configuration.

        4.  Once you have enabled the module, go to the navigation bar to create and configure the Maestro workflow links.

             Before you create the navigation link for the Maestro workflow, make sure you know the template ID number of the workflow you are referring to.

             For example you could locate the template id number by going to Administration->Structure->Maestro Workflows. In this case, I am going to create the direct link on the navigation bar for my Membership Application Workflow. The template id number is 1.


            Now we go to Administration-> Structure->Menus

            Click on the operation “list links” beside Navigation.

            We can simply add a link by clicking on the add-link operation at the top of the link list.

            Note: the link path should follow the pattern mastrolinks/<template_id>. So in my example, it would be mastrolinks/1.


               Now the link “Apply for the Membership” shows up on the navigation bar, and clicking it will automatically kick off the Membership Application Workflow.

               End of the instructions and have fun with the new module.

               Merry Christmas to everyone!!

Back up existing data and load a data dump into the database

Export current data dump as a backup.

   Method 1: Use utility exp

  • Login to the Database Server.
  • Start a windows command prompt, click on Start > Run
  • Type in cmd in the dialog box and click on OK.
  • Change directory to the root C:\ >

Type in the following command:

exp <user_toexport>/<user_toexport_password> file=<directory>\dumpFileName.dmp log=<directory>\logFileName.log owner=<to_user> buffer=10000

Press [Enter]

The backup dump file will be found in the directory you specified.

For example: the following command is to export sample data from SAMPLE database:

exp sample/sample file=C:\sample.dmp log=C:\sample.log owner=sample buffer=100000

   Method 2: using Data Pump expdp

  • Login to the Database Server.
  • Start a windows command prompt, click on Start > Run
  • Type in cmd in the dialog box and click on OK.
  • Type in the following command to connect to the SAMPLE database

SQLPLUS system/<system password>

Press [Enter]


Execute the following commands to create a database directory. This directory must point to a valid directory on the same server as the database:

SQL> CREATE or REPLACE DIRECTORY <directory_name> AS '<directory\folder>\';

Directory created.

SQL> GRANT READ, WRITE on directory <directory_name> to <user_toexport>;

e.g.  CREATE or REPLACE DIRECTORY imp_dir as 'D:\db_dump';

GRANT READ, WRITE on directory imp_dir to bisbtm;

  • Create a folder under the directory.
  • Type in the following command:

expdp <user_toexport>/<user_toexport_password> directory=<directory_name> dumpfile=dumpFileName.dmp

e.g. expdp sample/sample directory=imp_dir dumpfile=samp.dmp

Press [Enter]

The backup dump file will be found in the directory you specified.
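If this export runs regularly, the expdp invocation is easy to script. Here is a sketch that just assembles the command line; the user, database, and directory names are placeholders, and the command is meant to run on the database server where expdp is on the PATH:

```python
import datetime
import subprocess  # used only when you actually execute the command

def build_expdp_cmd(user, pwd, db, directory, prefix="backup"):
    # Date-stamp the dump and log names so nightly runs don't collide.
    stamp = datetime.date.today().strftime("%Y%m%d")
    return ["expdp", f"{user}/{pwd}@{db}",
            f"directory={directory}",
            f"dumpfile={prefix}_{stamp}.dmp",
            f"logfile={prefix}_{stamp}.log"]

# To actually run it on the DB host:
# subprocess.run(build_expdp_cmd("sample", "sample", "db10g", "imp_dir"),
#                check=True)
```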

To be continued…


Tweaking within Drupal 7

Warning: fileowner(): stat failed for temporary://upd16E2.tmp in update_manager_local_transfers_allowed() (line 932 of C:\Program Files (x86)\EasyPHP-12.0\www\aapi\modules\update\update.manager.inc).
This error occurs on the page for installing a new module, like below:


Reason: the error is caused by an incorrect tmp directory set in the file system configuration. I had copied my website data from one server to another, and the original tmp folder path is not valid at the same location on the new server.

Solution: Go to Home-> Administration->Media->File System. Change the corresponding field.

Missing Manage Fields Link on Content Type Pages

Go to Modules and search for Field UI, make sure the module is enabled.

Drupal site running slow and out of memory issue

You might run into errors like the one below, complaining about memory exhaustion.

One cause is not having enough PHP memory. Look at php.ini and locate the keyword “memory_limit”.

The setting below is from my local php.ini. It only allowed 128M, so I increased it to 1024M; the error goes away and the Drupal site runs a lot faster.

memory_limit = 1024M

Use customized font in drupal 7

I assume that you have already chosen your best theme and you are willing to add a little bit more variety in fonts besides using the default ones that are provided by your current theme.

One method I recommend is using Google Web Fonts. They have an abundant collection of the most popular fonts frequently used by different websites.

How to use it in Drupal Theme?

  1. Go through the choices of fonts from here: http://www.google.com/webfonts#
  2. Find style.css under your theme folder usually at: [site_corecode]/themes/<theme_name>

          If you cannot find your theme under this directory, try the other one at: [site_corecode]/sites/all/themes/<theme_name>.

    3.   Modify style.css and add the following code at the beginning of the file.

          For example: @import url(http://fonts.googleapis.com/css?family=Oswald);

         You should find the @import code in Step 1. Pick the font and then click the quick-use icon at the bottom right corner of that font.

         Then, on the quick-use page, you will see step 2, “Choose the character sets you want”. Find the @import option and copy the code.

   4. Now you should be able to use the imported fonts anywhere by referring to the font-family like below:

     font-family: 'Droid Sans', sans-serif;

    The other method is to use custom fonts you have downloaded, referencing them in your style.css with @font-face.

     Suppose the font you want to use is Sansation_Light and you have already downloaded the TTF file Sansation_Light.ttf.

     Put this ttf file under the location : [site_corecode]/sites/default/files/

     And then add the following code at the beginning of the style.css file.

@font-face {
  font-family: myFirstFont;
  src: url('/<site_name>/sites/default/files/Sansation_Light.ttf');
}

     Then, whenever you want to use this font, simply set the font-family to myFirstFont.

Side Topic on Maestro Workflow with Drupal 7

When I was building a demo website for clients using the Maestro workflow in Drupal 7, I encountered the following error:

   Only variables should be passed by reference in maestro_accept_reject()


   Looking at the maestro_common.module snippet below, there seems to be no problem with the code itself. However, with the PHP 4.4 and 5.0.5 releases, a change was made to the engine that has caused new errors to pop up in existing code. The error shown above did not hurt basic functionality, but it made for an annoying user experience.

   After researching the problem for a while, and understanding the PHP developers’ determination to keep the new behavior, I figured out a way to reformat the code to make the PHP engine happy: assign the function result to a temporary variable first, then pass that variable by reference.

   So now the error goes away by having the code reformatted as below:

   I hope this helps anyone who has suffered the same problem!


Create a readonly LDAP Bind DN with Oracle OID

Although Oracle Directory Manager is a powerful tool, as the application server administrator you will probably find it easier to use the web-based tool oiddas, the OID Self Service Console (SSC), which is part of the Delegated Administration Services. This tool is much easier to use for managing users.

1. Log in to the Oracle Identity Management Self-Service Console (OIDDAS).

To access the SSC, open your browser, point it to the infrastructure OHS port, and add the oiddas directory to the URL.


2. Once you click Login, since ours is an SSO-enabled environment, you will be transferred to the SSO login page. Here you have to use the orcladmin bind credentials.

3. Click OK, and you will be able to log in to oiddas as below:

4. Click Directory tab on this page

5. Click Create to create a new user called readonly. Fill in the basic information
of this user.

6. Once you click Submit, you should be able to search for and find the new user.

7. Click Privileges to set the required permissions for this user. For now, we don’t set anything, so the account stays read-only.

8. Test if we could use the account to bind to our current LDAP Server.

Possible Issues and solutions:

This issue occurs because the DSA service is not started. Check the status of the current components; see the picture below:

In fact, when you use ./opmnctl startall, the components DSA, LogLoader, and dcm-daemon will NOT be started automatically. You have to start them one by one using the following commands:

opmnctl startproc ias-component=dcm-daemon
opmnctl startproc ias-component=dsa
opmnctl startproc ias-component=LogLoader

Improve Database performance by analyzing tables

A little background on why we need to analyze tables, indexes, and clusters:

         Tables change over time; for example, a regular ETL process may constantly modify table structures or contents. The statistics Oracle collected during the last analysis can then be out of sync with the current data, and this information is what the optimizer uses when planning queries. Queries planned against stale statistics can run slowly, and overall database performance decreases because updated statistics were never collected.

       What statistics does Oracle collect during analysis?

         When the Oracle ANALYZE TABLE command is performed, Oracle collects statistics on the number of rows, the number of empty data blocks, the number of blocks below the high-water mark, the average data block free space, the average row length, and the number of chained rows in the table. ANALYZE TABLE can be used to collect statistics on a specific table.

          When using Oracle ANALYZE TABLE all domain indexes marked LOADING or FAILED will be skipped.

          Oracle will also calculate PCT_ACCESS_DIRECT statistics for index-organized tables when using ANALYZE TABLE.

         Note: before analyzing a table with the Oracle ANALYZE TABLE command, you must create any function-based indexes on the table.

        Two options for analyzing tables: Compute Statistics vs. Estimate Statistics

             Compute Statistics     Estimate Statistics
Method       Full table scan        Sampling
Accuracy     High                   Depends on the sample
Cost         High                   Low

        Compute Statistics runs a full table scan against the entire Oracle table; upon completion, the data dictionary is updated with highly accurate statistics for the cost-based optimizer. Estimate Statistics, by contrast, takes samples from the table, and statistics based on those samples are stored in the data dictionary.

       Note: you need to weigh time and database resources against the accuracy of the statistics when choosing between the two methods. For small to medium tables, use analyze table table_name compute statistics. For larger tables, take a sample (for example, analyze table table_name estimate statistics sample 20 percent).
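To see the full-scan idea in a runnable form, here is a small stand-in using Python's built-in sqlite3: SQLite's ANALYZE command plays the same role as Oracle's statistics gathering, scanning the data and recording row counts that the query planner consults (stored in sqlite_stat1):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO t (v) VALUES (?)", [("x",)] * 1000)
conn.execute("CREATE INDEX t_v ON t (v)")

# ANALYZE scans the table and its indexes and stores the statistics
# the query planner uses -- the analogue of gathering statistics in Oracle.
conn.execute("ANALYZE")

stats = conn.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
```

After the ANALYZE, sqlite_stat1 records that the t_v index covers 1000 rows, which is exactly the kind of row-count information a cost-based planner relies on.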

         Utilize TOAD for table analysis

          Log in to TOAD with your database connection. Select the table you want to analyze and right-click Analyze Table.


    Once you get to the analysis page, select the tables you want to analyze and click the green arrow to start. By default the method is estimate statistics, as shown in the picture below.

The success message is displayed after the analysis completes.

You can also switch the mode to compute statistics by going to the Options tab and modifying the analyze function, as shown below:

              Utilize SQL*Plus for table analysis

             Log in to SQL*Plus with your database connection as below:

            A better solution: utilize DBMS_STATS to collect statistics

Here is a code snippet showing how to invoke the DBMS_STATS package in 10g. Note that the ANALYZE TABLE commands are now considered old-fashioned; the DBMS_STATS package is used more and more frequently because it provides high-quality information about tables and indexes.






BEGIN
  -- Reconstructed call: only the parameter lines survived in the original
  -- post; ownname here is a placeholder for your schema.
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => USER,
    options          => 'GATHER AUTO',
    estimate_percent => DBMS_STATS.auto_sample_size,
    method_opt       => 'for all columns size repeat',
    cascade          => true);
END;
/



For more detailed information on the features Oracle Database 11g offers, please take a look at the guru’s blog below: