Business Objects – Best Practice – Report Design
The purpose of this document is to provide a description of what I consider to be best practice when designing a business intelligence report. I will also try to describe the reasoning behind the suggestion.
Business Objects – Best Practice – Universe Design
The purpose of this document is to provide a description of what I consider to be best practice when designing a Business Objects Universe. I will also describe the reasoning behind each suggestion.
I was recently asked by a customer what steps we take to secure their files. I always use TrueCrypt to encrypt all customer files. This is an overview of how you can use TrueCrypt.
TrueCrypt gives you the ability to store information in a hidden partition that is secure from prying eyes.
Just as importantly the software is easy to use, being almost seamless in use.
On my most recent project I am again designing another data warehouse solution.
My tool of choice for data modelling has always been CA’s ERwin DataModeler http://www.ca.com/us/products/product.aspx?ID=260
This is great, but we run into problems when trying to reverse engineer an existing database, as this means that either my laptop has to go onto the customer’s network (that’s never going to happen) or the customer has to buy a copy of ERwin to use on their desktop.
My most recent customer is a dedicated Oracle customer using Database, Data Guard, RAC, Forms, OBIEE, Answers and on and on.
The obvious choice would be Oracle Designer. I have not used this in a few years but the last time it took me two days to get it installed and configured.
A quick search pointed me to an early adopter release of SQL Developer Data Modelling; you can find it here: http://www.oracle.com/technology/products/database/sql_developer/files/releasenotes525.htm
Here are a few of my thoughts:
This is really obvious with hindsight but it took me ages before “the penny dropped”.
Most of the time I find myself using the “Filtered Rows” option on my selectors.
The main selector that I want to use is the “Filter”, but strangely this does not have the option of filtered rows.
What I have resorted to doing in the past is to use multiple selectors:
A date drop down list box filters the rows from “Raw Data” sheet to the “Monthly” sheet.
A product drop down list box filters the rows from the “Monthly” sheet to the “MonthlyProducts” sheet.
A customer drop down list box filters the rows from the “MonthlyProducts” sheet to the “MonthlyProductsCustomers” sheet which now lists all of the sales made for that combination.
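The cascade above can be sketched in plain Python, with each “sheet” as a list of dict rows. The field names and sample rows are my own illustrative assumptions, not taken from the original dashboard:

```python
# Sketch of the three-stage selector cascade: each drop-down narrows
# the previous sheet's rows. Data below is invented for illustration.
raw_data = [
    {"month": "2009-05", "product": "Widget", "customer": "Acme", "sales": 120},
    {"month": "2009-05", "product": "Widget", "customer": "Bolt Co", "sales": 80},
    {"month": "2009-05", "product": "Gadget", "customer": "Acme", "sales": 60},
    {"month": "2009-04", "product": "Widget", "customer": "Acme", "sales": 95},
]

# date drop-down: "Raw Data" -> "Monthly"
monthly = [r for r in raw_data if r["month"] == "2009-05"]
# product drop-down: "Monthly" -> "MonthlyProducts"
monthly_products = [r for r in monthly if r["product"] == "Widget"]
# customer drop-down: "MonthlyProducts" -> "MonthlyProductsCustomers"
monthly_products_customers = [r for r in monthly_products if r["customer"] == "Acme"]

print(monthly_products_customers)  # every sale for that combination
```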
The filter component could do that, but this component assumes that there will only be a single row in the output. As soon as you have multiple rows in your result set then you can’t use the filter component.
Ahhh, but you can with a little lateral thinking; that’s what Xcelsius is all about.
I have spent most of today throwing data around in MS Excel.
One of the things that I have had to do is to convert percentages into real numbers.
In Excel, percentages are stored as fractions: for example, 50% is stored as 0.5, 75% as 0.75, and so on.
I am trying to chart these figures in Business Objects Xcelsius, but the y-axis labels are just showing 0 to 0, hence the need to convert my numbers.
I have just found a really easy way to do this.
- In the target cells, enter the value 100 in each cell.
- Copy the source cells
- Select “Paste Special” using the options “Values” and “Multiply”
This takes the source value 0.75, multiplies it by 100 and stores the result: 75!
I have used Paste Special – Values often, but I have never seen the need for the Multiply option until now!
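Outside of Excel, the same conversion is just a multiply across the range. A tiny Python sketch of what Paste Special – Values with Multiply effectively does (sample values are my own):

```python
# Each target cell holds 100; pasting the source values with the
# Multiply option multiplies the pasted value into the target cell.
source = [0.5, 0.75, 0.125]      # percentages as Excel stores them
target = [100] * len(source)     # the cells pre-filled with 100
result = [s * t for s, t in zip(source, target)]
print(result)  # [50.0, 75.0, 12.5]
```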
You may want to check out some of my other Excel related posts here.
I have spent all day in MS Excel.
I receive Excel spreadsheets in a “fairly” fixed format.
I need to get this data into Business Objects Xcelsius via another spreadsheet that stores historical records and keeps the data in a format optimised for display.
“Normally” I would be using Data Integrator (DI) to move the data from the source, store the data in a database table or two, and use Query as a Web Service (QaaWS) or Live Office to pull the data from the database directly into Xcelsius.
Here we do not have access to any of these luxuries.
I have written a translation spreadsheet that I can point to a specific source and it will re-shape the data so that it can fit my target spreadsheet. Then it’s just a case of copy and then paste values….
To make this process easier I am using the “INDIRECT” function in Excel.
=INDIRECT("[" & $H$1 & "]Data!" & AU$1 & $C17) * 100
Here the current cell will grab the value from the workbook named in H1, go to the “Data” sheet, take the row and column references from AU1 and C17, and then multiply the number by 100.
This means that next month I just need to change the name of my source spreadsheet and it will all just work. If a row or column “moves” in the source, I just update the metadata (AU1 or C17 in this case).
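If the formula is hard to picture, here is a rough Python analogue of the indirection, with nested dicts standing in for open workbooks and sheets. The workbook name and cell values below are invented for illustration:

```python
# Rough analogue of =INDIRECT("[" & $H$1 & "]Data!" & AU$1 & $C17) * 100.
# The address is assembled from data held in other "cells", so retargeting
# the whole sheet means changing one value.
workbooks = {
    "source_may.xls": {            # the workbook named in H1
        "Data": {"C17": 0.75},     # the "Data" sheet
    }
}

h1 = "source_may.xls"   # $H$1: which workbook to read
au1 = "C"               # AU$1: column letter for this target column
c17 = "17"              # $C17: row number for this target row

# build the address, follow it, and convert the fraction to a whole number
value = workbooks[h1]["Data"][au1 + c17] * 100
print(value)  # 75.0
```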
It takes a long time to set up and to check but next month it should be a breeze.
One of the major features being “re-introduced” to the XI product family is Publishing.
This gives you the ability to write a single report, that runs a single query but sends a different “version” of the report to each user.
This is only scratching the surface of the full Publisher functionality; let’s have a look at how you can create your first user-targeted publications.
When in doubt, don’t fiddle.
I “forgot” the admin password for my Business Objects XI 3.0 (Demo) environment.
Some searching turned up a “hack” that would resolve this, which involves deleting data from the repository database. One quick delete later and my installation is broken.
Of course if this was in the “real world”
- I would have a backup Admin user.
- All my passwords would be stored in my KeePass database.
- Before modifying the database I would have taken a backup.
No worries, I just did an uninstall and re-install – a couple of hours’ work, but work I shouldn’t have had to do.
Lesson Learned – Treat even your demo system like production!
Had a “strange” problem last week that really stumped me for longer than it should have.
My colleague had run a query to list some data and was getting approximately 200,000 rows back. We were investigating why a unique constraint was being violated so I took his query and did a count distinct to try and find the duplicates. I got 30,000 distinct values.
This really confused me; I was expecting only a handful of duplicates, if any. Then I noticed that the first query already WAS a distinct list!
select distinct col1_id, col2_id from tablea
select count(distinct col1_id + '-' + col2_id) from tablea
I concatenated the fields together so I am counting a single field, I included the separator so that I could differentiate 1-11 from 11-1.
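As an aside, one way to hunt duplicate pairs without building any strings at all is GROUP BY … HAVING. A quick sqlite3 sketch (the table name is borrowed from the queries above; the sample rows are invented):

```python
import sqlite3

# Grouping on the pair directly finds duplicates with no concatenation,
# so 1-11 and 11-1 can never be confused.
con = sqlite3.connect(":memory:")
con.execute("create table tablea (col1_id, col2_id)")
con.executemany("insert into tablea values (?, ?)",
                [(1, 11), (1, 11), (11, 1), (2, 3)])

dupes = con.execute("""
    select col1_id, col2_id, count(*) as n
    from tablea
    group by col1_id, col2_id
    having count(*) > 1
""").fetchall()
print(dupes)  # [(1, 11, 2)]
```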
See if you can spot what the problem is before you read on.