
Saturday, 10 October 2009

Beautiful BABI: SiSense PrismCubed Offers Business Intelligence for Business Analysts

Summary: SiSense PrismCubed offers a reasonable option for a business-analyst business intelligence system. It’s probably a little harder to use than some competitors, but gives a bit more power and flexibility in return.

SiSense PrismCubed, officially launched this past August, is another member of the growing set of business intelligence systems aimed at empowering business analysts to build their own applications. I’ve also written about QlikView and Lyza, and I think there are others.

What distinguishes these tools from other business intelligence systems is that they let non-technical users manipulate source data in more sophisticated ways than a spreadsheet or report writer. Specifically, data from several sources can be merged on a common key, filtered, aggregated and processed through complex formulas.
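To make that concrete, here is a minimal pandas sketch of the same manipulations on invented data: two sources merged on a common key, then filtered, aggregated, and extended with a derived formula. (Python and the table names are my choices for illustration, not anything a vendor ships.)

```python
import pandas as pd

# Two invented source tables that share a common key.
orders = pd.DataFrame({"customer_id": [1, 1, 2, 3],
                       "amount": [120.0, 80.0, 45.0, 300.0]})
customers = pd.DataFrame({"customer_id": [1, 2, 3],
                          "region": ["East", "West", "East"]})

merged = orders.merge(customers, on="customer_id")  # merge on a common key
large = merged[merged["amount"] >= 50]              # filter
summary = large.groupby("region", as_index=False)["amount"].sum()  # aggregate

# A derived field computed through a formula.
summary["amount_share"] = summary["amount"] / summary["amount"].sum()
print(summary)
```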

This sort of manipulation has traditionally required SQL programmers, OLAP cube designers, or similar technical experts. Allowing business analysts to do it without having to learn deep technical skills is precisely what lets them build applications with minimal external assistance. (I say "minimal" because technical staff must still handle connections to the source data.)

These systems also provide report creation and distribution. But unlike business-analyst data manipulation, those capabilities are also found in other business intelligence products.

You’ll note that my definition does NOT specify a particular database technology, such as in-memory or columnar, that the data is updated in real time, that the system is targeted at mid-sized businesses, or that results are distributed pervasively through the organization. Those have all been proposed as ways to classify business intelligence systems, and several of the products in my business-analyst business intelligence (BABI--how cute!) category fall into one or another such group. But I think it’s a mistake to focus on those other features because they don’t get at the real value provided by these tools, which is the flowering of applications made possible when business analysts can create them independently.

Now that I've defined a new type of application, complete with the all-important acronym, the next step is defining an evaluation framework to help compare the competitors. I’d like to claim I do this through deep research and brilliant insights into user needs, but, in fact, I generally start with the features in the existing systems. This runs the risk of missing some critical requirement that no vendor has yet uncovered, but it saves a ton of work. And I can still argue that I’m piggybacking on the vendors’ own deep research and insights as embodied in their products.

In any event, a starter set of review criteria for BABI systems (sorry, but I find the acronym irresistible) would include:

- combine data from multiple, heterogeneous sources (relational databases, CSV files, Excel tables, etc.)

- allow non-technical users to define processing flows to manipulate the data (merge, filter, aggregate, calculate)

- present the manipulated data in a structure that’s suitable for reporting and visualization

- allow non-technical users to create applications including reports, visualizations, and (optionally) additional functions such as data refresh and export

- allow other users to view (and optionally interact with) the applications

- meet reasonable performance standards for data load, storage, response time and scalability

- use appropriate technology (actually, I don’t care if the thing is powered by hamsters. But understanding the underlying technology helps to predict where problems might arise.)

- affordable pricing (not exactly a criterion, but important nevertheless)

Obviously these criteria could be much more detailed, and no doubt they will grow over time. But for now, they provide a useful way to look at PrismCubed.

1. Combine data from multiple sources: PrismCubed provides a wizard to connect with different data sources, including SQL Server, Oracle, CSV files, Excel and Amazon S3 logs (which earns them extra coolness points). The system can read the database schemas directly, saving users the need to define basic data structures. Users have the option to modify structures if they desire. A connection can be live (i.e., the source is requeried each time a report is run) or reloaded on demand from within a completed application. This provides real-time data access, which isn’t always available in business intelligence systems. The system can also reload data automatically on a user-specified schedule.

2. Allow non-technical users to manipulate source data: PrismCubed does a particularly good job here. At a basic level, users can write complex formulas to add derived fields to a table during the import process. More important, a drag-and-drop interface lets them build complex visual processing flows from standard icons including data definition, filtering, inclusion or exclusion, unions, and top or bottom selects. These flows can combine multiple data sources and include branches that generate separate output sets that are all available to use in applications.
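A rough code equivalent of such a flow, using pandas on invented data, might combine a union step with a top-select branch, leaving two separate output sets available downstream; PrismCubed expresses the same logic with drag-and-drop icons rather than code.

```python
import pandas as pd

# Invented yearly extracts standing in for two data sources.
sales_2008 = pd.DataFrame({"product": ["A", "B"], "units": [10, 20]})
sales_2009 = pd.DataFrame({"product": ["A", "C"], "units": [15, 5]})

# Union step: stack both sources into one set.
all_sales = pd.concat([sales_2008, sales_2009], ignore_index=True)

# Top-select branch: a second output set holding the best seller.
top_seller = all_sales.nlargest(1, "units")

# Both output sets remain available to applications.
print(all_sales)
print(top_seller)
```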

3. Present the manipulated data for reporting: the system automatically classifies input data as dimensions (text, dates, etc.) and measures (numbers which can be aggregated). Users can override the system’s assignments and can add new dimension fields during the data load. They can create derived measures at any time. Once the load is complete, the system presents the dimensions and measures in an “ElastiCube” available for reports and other applications.
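As a loose illustration only (the rule here, numeric-means-measure, is my assumption; SiSense's actual classification logic may be more subtle), the automatic split could look like this:

```python
import pandas as pd

df = pd.DataFrame({
    "order_date": pd.to_datetime(["2009-01-01", "2009-02-01"]),
    "region": ["East", "West"],
    "revenue": [100.0, 250.0],
})

# Numeric columns become measures; everything else is a dimension.
measures = [c for c in df.columns if pd.api.types.is_numeric_dtype(df[c])]
dimensions = [c for c in df.columns if c not in measures]
print("dimensions:", dimensions)  # ['order_date', 'region']
print("measures:", measures)      # ['revenue']
```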

4. Create reports and other applications: the system provides a remarkably rich development environment. Users build applications by dropping different types of objects (which the vendor calls widgets) onto dashboard pages. Widgets can make selections; display data in pivot tables, charts, calendars, and images; and execute actions including refreshing data, jumping to different pages, querying external data sources, editing data, and exporting to Excel. A dashboard can have multiple pages.

The primary reporting widget is the pivot table, which itself is built by dragging dimensions into rows and columns, and the measures into cell values. Users can apply filters to widgets, such as selecting the top 10 values for a dimension. These filters can be static (a fixed list) or dynamic (reselected each time the dashboard is updated). PrismCubed also provides special features for time series calculations such as period-to-period growth and differences. That's a nice touch, because those can be quite difficult to define with conventional reporting systems.
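For readers who think in code, here is a hedged pandas sketch of those three ideas on invented data: a pivot, a "top N" filter on a dimension, and a period-to-period growth calculation of the kind PrismCubed builds in.

```python
import pandas as pd

df = pd.DataFrame({"month": ["2009-01", "2009-01", "2009-02", "2009-02"],
                   "store": ["S1", "S2", "S1", "S2"],
                   "revenue": [100, 80, 120, 70]})

# Pivot: months in rows, stores in columns, revenue in the cells.
pivot = df.pivot_table(index="month", columns="store",
                       values="revenue", aggfunc="sum")

# A dynamic "top N" filter: keep only the highest-revenue store.
top_stores = df.groupby("store")["revenue"].sum().nlargest(1).index
print(pivot[top_stores])

# Period-to-period growth, computed down the time axis.
print(pivot.pct_change())
```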

Reporting widgets can be connected to the ElastiCube dimensions and measures or directly to SQL data sources. Users can also specify whether selections made in one widget affect the data displayed in other widgets. There are actually three options here: complete independence, direct links from one widget to another, and global impact on all other widgets. This gives more flexibility than systems that automatically apply global selections, but does force users to do more work in specifying which approach they want.

Widgets, filters and other components can be stored in a central repository and reused across applications.

5. Share applications: Users can export dashboard contents to Excel tables or can copy an entire dashboard as a static PDF. Applications, including underlying ElastiCubes, can be copied and run on another user’s PC. In addition, a Web server due for release this month (October) will let dashboard creators publish their dashboards to a central server, where other users will be able to access and modify them. The server will provide fine-grained control over what different users are allowed to change.

6. Scalability and Performance: SiSense has tested the PrismCubed engine on multiple terabytes of data. It cited one client who loaded 30 million telephone call detail records in 30 to 90 minutes. Loaded data usually takes somewhat less disk space than the original source. The system currently requires a complete reload to add new data to an existing ElastiCube, although the vendor plans to add incremental appends by the end of November. Once the data is loaded, reports within applications usually update in seconds.

7. Technology: PrismCubed stores data in a columnar data structure. It also stores a dimension map for each column, but doesn’t preaggregate the data along the dimensions. As with other columnar databases, this avoids the need for specialized data structures to handle particular queries. When data has not been preloaded into the system, PrismCubed can also run the same query across multiple external data sources.

Although PrismCubed stores the entire ElastiCube on disk, it only loads into memory the columns required for a particular query. This lets it handle larger data sets than purely in-memory systems without massive hardware. There might be some problems if the selected columns for a query exceeded the system’s available memory.
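PrismCubed's storage internals aren't public, but the general columnar pattern is easy to sketch with pandas and Parquet (my stand-in format, requiring the pyarrow or fastparquet package): the full table sits on disk while a query loads only the columns it touches.

```python
import pandas as pd

# A wide invented table written to columnar storage on disk.
wide = pd.DataFrame({f"col{i}": range(1000) for i in range(50)})
wide.to_parquet("elasticube_demo.parquet")

# Only two of the fifty columns are read into memory for this "query".
subset = pd.read_parquet("elasticube_demo.parquet", columns=["col0", "col1"])
print(subset.shape)  # (1000, 2)
```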

PrismCubed runs on Windows PCs with the .NET framework installed. On 64 bit systems, this means the amount of potential memory is virtually unlimited. Although PrismCubed itself is new, a previous version of the product using the ElastiCube database engine was launched in September 2008 and has more than 6,000 users.

8. Pricing: PrismCubed is priced on an annual subscription basis, which is unusual for this type of product but common among hosted BI vendors. SiSense offers several versions of PrismCubed, ranging from a free Viewer that can only access dashboards created elsewhere, to a $1,500 per year Professional edition that allows full creation of dashboards and ElastiCubes. There are also a free version (limited to 2,000 rows of data), a $300 per year Personal edition (which can create dashboards but not share them), and a $700 per year Analyzer that can build and share dashboards but not ElastiCubes. Server pricing wasn’t quite set when I spoke with SiSense but will probably be around $3,500 per year per server.

These prices are quite reasonable compared with similar vendors, even considering the recurring annual subscription fees, particularly because the end-user Viewer is free. Price details are published on the vendor’s Web site.

Wednesday, 10 June 2009

QlikView 9.0 Reaches for Broader Business Intelligence Market

QlikTech released version 9 of its QlikView business intelligence software today. The product has been in public beta for several months, so the general features are well known to people who care about such things.

Probably the item that attracted the most advance attention is an iPhone version that supports interactive analysis; this also works for other Java Mobile clients like Blackberry. It's cool (or ‘qool’, if you must) but not so important in the grand scheme of things. More significant changes include:

- availability through the Amazon Elastic Compute Cloud (EC2), which lets companies order up a QlikView-equipped server in minutes. (Of course, they still have to purchase a QlikView license.) Users can also expand or reduce the number of servers to match fluctuating needs. Advantages include avoiding the wait for new hardware, eliminating the need to physically install a server, and meeting peak demands without making a fixed investment.

- API for real-time updates of in-memory data. This is an extension of previous changes that allowed incremental batch updates and manual data entry. But it still marks a major step towards letting QlikView run time-critical applications such as stock trade analysis, pricing and inventory management. No one will be processing orders on QlikView (hmm, never say never), but the line between analytical and transaction databases just got that much thinner.

- enhanced support for enterprise-level deployments. This includes centralized control panels for multiple servers; load balancing and fail-over; better thin-client support; multi-billion-row data sets; and more efficient calculations. These are critical as QlikView moves from being a departmental solution run primarily by business analysts to a mission-critical system backed by corporate IT.

- free Personal Edition with full development capabilities. The main limit vs. the licensed version is that Personal Edition cannot read QlikView files developed on any other copy of the software, and no one else can read files that Personal Edition generates. The goal is to make it easier for users to try the system on their own – a continuation of the company's long-standing "seeing is believing" strategy.

- functional enhancements including improved visualization, search and automation functions. These are nice but none seemed especially exciting. Changes in previous recent releases, such as set analysis (simultaneously comparing two sets of selected records) were more fundamental. Remember, we're talking about version 9: the system is already quite polished.

Of all these items, the one I found most thought-provoking was the free Personal Edition, which replaces a 15-day free trial. Removing the time limit lets users build QlikView into their regular work. The strategy makes sense, but it doesn’t lower the $30,000 - $50,000 investment required for the smallest licensed QlikView installation. Few analysts, who are the most likely users for Personal Edition, have the clout to sponsor so large an investment. Competing analyst tools such as LyzaSoft, ADVIZOR Solutions and Tableau can generally provide a 5-10 user departmental deployment for under $10,000. Although QlikView is vastly more powerful than the others, the lower cost will give them an initial advantage. And once they’re in place, it’s hard to get a company to switch.

On the other hand, maybe QlikView is really moving to compete with traditional business intelligence tools like Cognos, Business Objects and MicroStrategy. QlikView’s entry cost is vastly lower than those products, especially once you consider the savings in labor. But most enterprises have a BI tool already in place, so it’s not a matter of comparing entry costs. Rather, the choice is entry cost for QlikView vs. incremental deployment cost on the incumbent. The labor savings with QlikView are so great that it will still be cheaper for many projects. But QlikView will remain a tough sell because IT departments are reluctant to invest in the staff training needed to support an additional tool.

QlikView will never fully replace the traditional data warehouse and BI tools because its in-memory approach limits the size of its databases. With 64 bit systems, the product can easily handle dozens of gigabytes of data. This is quite a lot, but even the smallest enterprise data warehouses now hold multiple terabytes. QlikView works with such systems by executing SQL queries against them, pulling down limited data sets, loading these into memory, and analyzing them. That’s an excellent and perfectly viable approach, but it does rely on the warehouse being there in the first place.
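That pattern is worth a quick sketch, with Python's built-in sqlite3 standing in for the warehouse: the heavy aggregation runs in the database, and only a small result set is pulled into memory for analysis. (The table and data are invented.)

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 100), ("East", 50), ("West", 75)])

# The warehouse does the reduction; the client holds only the summary.
summary = pd.read_sql_query(
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region", conn)
print(summary)
```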

None of this is to suggest that QlikView has anything but a very bright future. When I first spoke with the company in 2005, it had just reached 2,000 clients; at last count, it had over 11,000. Revenue for 2008 was $120 million and had risen 50% from the previous year. The product has finally attracted attention from analyst firms like Gartner and Aberdeen and is very well rated in Nigel Pendse’s latest BI Survey. My brief fling as a VAR ended two years ago, but I still use it personally for any non-trivial data analysis work and remain absurdly pleased with the results. I won’t say QlikView is better than sex, but its pleasures are equally difficult to describe to the uninitiated. Anyone interested in BI software who hasn’t given it a try (QlikView, not sex) should download a copy and see what they’ve been missing.

Thursday, 16 April 2009

Lyzasoft: Independence for Analysts and Maybe Some Light on Shadow IT

Long-time readers of this blog know that I have a deep fondness for QlikView as a tool that lets business analysts do work that would otherwise require IT support. QlikView has a very fast, scalable database and excellent tools to create reports and graphs. But quite a few other systems offer at least one of these.*

What really sets QlikView apart is its scripting language, which lets analysts build processing streams to combine and transform multiple data sources. Although QlikView is far from comparable with enterprise-class data integration tools like Informatica, its scripts allow sophisticated data preparation that is vastly too complex to repeat regularly in Excel. (See my post What Makes QlikTech So Good for more on this.)

Lyzasoft Lyza is the first product I’ve seen that might give QlikView a serious run for its money. Lyza doesn’t have scripts, but users can achieve similar goals by building step-by-step process flows to merge and transform multiple data sources. The flows support different kinds of joins and Excel-style formulas, including if statements and comparisons to adjacent rows. This gives Lyza enough power to do most of the manipulations an analyst would want in cleaning and extending a data set.
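For flavor, here is what those two formula types look like in pandas on invented data: an Excel-style IF, and a comparison against the adjacent (previous) row. Lyza has its own formula syntax; this just shows the logic.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"day": [1, 2, 3, 4], "revenue": [100, 90, 130, 130]})

# Excel-style IF(revenue >= 100, "high", "low").
df["band"] = np.where(df["revenue"] >= 100, "high", "low")

# Compare each row to the row above it.
df["change_vs_prior"] = df["revenue"] - df["revenue"].shift(1)
print(df)
```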

Lyza also has the unique and important advantage of letting users view the actual data at every step in the flow, the way they’d see rows on a spreadsheet. This makes it vastly easier to build a flow that does what you want. The flows can also produce reports, including tables and different kinds of graphs, which would typically be the final result of an analysis project.

All of that is quite impressive and makes for a beautiful demonstration. But plenty of systems can do cool things on small volumes of data – basically, they throw the data into memory and go nuts. Everything about Lyza, from its cartoonish logo to its desktop-only deployment to the online store selling at a sub-$1,000 price point, led me to expect the same. I figured this would be another nice tool for little data sets – which to me means 50,000 to 100,000 rows – and nothing more.

But it seems that’s not the case. Lyzasoft CEO Scott Davis tells me the system regularly runs data sets with tens of millions of rows and the biggest he’s used is 591 million rows and around 7.5-8 GB.

A good part of the trick is that Lyza is NOT an in-memory database. This means it’s not bound by the workstation’s memory limits. Instead, Lyza uses a columnar structure with indexes on non-numeric fields. This lets it read required data from the disk very quickly. Davis also said that in practice most users either summarize or sample very large data sets early in their data flows to get down to more manageable volumes.

Summarizing the data seems a lot like cheating when you’re talking about scalability, so that didn’t leave me very convinced. But Lyza offers a free 30 day trial download, which let me test it myself.

Bottom line: my embarrassingly ancient desktop (2.8 GHz CPU, 2 GB RAM, Windows XP) loaded a 400 MB CSV file with about 430,000 rows in just over 6 minutes. That’s somewhat painful, but it does suggest you could load 4 GB in an hour – a practical if not exactly desirable period. The real issue is that each subsequent step could take similar amounts of time: copying my 400 MB set to a second step took a little over 2 minutes and, more worrisome, subsequent filters took the same 2 minutes even though they reduced the record count to 85,000 then 7,000 then 50. This means a complete processing flow on a large data set could run for hours.

Still, a typical real-world scenario would be to do development work on small samples, and then only run a really big flow once you knew you had it right. So even the load time for subsequent steps is not necessarily a show-stopper.

Better news is that rerunning an existing filter with slightly different criteria took just a few seconds, and even rerunning the existing flow from the start was much faster than the first time through. Users can also rerun all steps after a given point in the flow. This works because Lyza saves the intermediate data sets. It means that analysts can efficiently explore changes or extend an existing project without waiting for the entire flow to re-execute. It’s not as nice as running everything on a lightning-fast data server, but most analysts would find it gives them all the power they need.
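A toy sketch shows why saved intermediates pay off: if each step's output is materialized, rerunning from mid-flow reuses the stored upstream result instead of recomputing the chain. (The little step API below is invented purely for illustration.)

```python
import pandas as pd

saved = {}  # step name -> materialized intermediate result

def run_step(name, fn, upstream=None):
    inputs = saved[upstream] if upstream else None
    saved[name] = fn(inputs)
    return saved[name]

run_step("load", lambda _: pd.DataFrame({"x": range(1_000_000)}))
run_step("filter", lambda df: df[df["x"] % 2 == 0], upstream="load")

# Changing only the final step reruns it alone, against the saved
# "filter" output; the expensive "load" step never repeats.
print(run_step("top", lambda df: df.nlargest(5, "x"), upstream="filter"))
```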

As a point of comparison, loading that same 400 MB CSV file took almost 11 minutes with QlikView. I had forgotten how slowly QlikView loads text files, particularly on my limited CPU. On the other hand, loading a 100 MB Excel spreadsheet took about 90 seconds in Lyza vs. 13 seconds in QlikView. QlikView also compressed the 400 MB to 22 MB on disk and about 50 MB in memory, whereas Lyza more than doubled the data to 960 MB on disk, due mostly to indexes. Memory consumption in Lyza rose only about 10 MB.

Of course, compression ratios for both QlikView and Lyza depend greatly on the nature of the data. This particular set had lots of blanks and Y/N fields. The result was much more compression than I usually see in QlikView and, I suspect, more expansion than usual in Lyza. In general, Lyza seems to make little use of data compression, which is usually a key advantage of columnar databases. Although this seems like a problem today, it also means there's an obvious opportunity for improvement as the system finds itself dealing with larger data sets.

What I think this boils down to is that Lyza can effectively handle multi-gigabyte data volumes on a desktop system. The only reason I’m not being more definite is I did see a lot of pauses, most accompanied by 100% CPU utilization, and occasional spikes in memory usage that I could only resolve by closing the software and, once or twice, by rebooting. This happened when I was working with small files as well as the large ones. It might have been the auto-save function, my old hardware, crowded disk drives, or Windows XP. On the other hand, Lyza is a young product (released September 2008) with only a dozen or so clients, so bugs would not be surprising. I'm certainly not ready to say Lyza doesn't have them.

Tracking down bugs will be harder because Lyza also runs on Linux and Mac systems. In fact, judging by the Mac-like interface, I suspect it wasn't developed on a Windows platform. According to Davis, performance isn’t very sensitive to adding memory beyond 1 GB, but high speed disk drives do help once you get past 10 million rows or so. The absolute limit on a 32 bit system is about 2 billion rows, a constraint related to addressable memory space (2^31 = about 2 billion) rather than anything peculiar to Lyza. Lyza can also run on 64 bit servers and is certified on Intel multi-core systems.

Enough about scalability. I haven’t done justice to Lyza’s interface, which is quite good. Most actions involve dragging objects into place, whether to add a new step to a process flow, move a field from one flow stage to the next, or drop measures and dimensions onto a report layout. Being able to see the data and reports instantly is tremendously helpful when building a complex processing flow, particularly if you’re exploring the data or trying to understand a problem at the same time. This is exactly how most analysts work.

Lyza also provides basic statistical functions including descriptive statistics, correlation and Z-test scores, a mean vs. standard deviation plot, and stepwise regression. This is nothing for SAS or SPSS to worry about; in fact, even Excel has more options. But it’s enough for most purposes. Similarly, data visualization is limited compared to a Tableau or ADVIZOR, but allows some interactive analysis and is more than adequate for day-to-day purposes.

Users can combine several reports onto a single dashboard, adding titles and effects similar to a PowerPoint slide. The report remains connected to the original workflow but doesn’t update automatically when the flow is rerun.

Intriguingly, Lyza can also display the lineage of a table or chart value. It traces the data from its source through all subsequent workflow steps, listing any transformations or selections applied along the way. Davis sees this as quickly answering the ever-popular question, “Where did that number come from?” Presumably this will leave more time to discuss American Idol.
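Lyza's internal model isn't public, but the idea is simple to sketch: if every step records a description of its transformation, the chain can be read back to answer exactly that question.

```python
lineage = []  # ordered record of how the final value was produced

def step(description, fn, data):
    lineage.append(description)
    return fn(data)

data = step("loaded orders.csv", lambda _: [120, 80, 45, 300], None)
data = step("filtered amount >= 50", lambda d: [x for x in d if x >= 50], data)
total = step("summed amounts", sum, data)

print(total)                 # 500
print(" -> ".join(lineage))  # the answer to "where did that number come from?"
```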


Users can also link one workflow to another by simply dragging an object onto a new worksheet. This is a very powerful feature, since it lets users break big workflows into pieces and lets one workflow feed data into several others. The company has just taken this one step further by adding a collaboration server, Lyza Commons, that lets different users share workflows and reports. Reports show which users send and receive data from other users, as well as which data sets send and receive information from other data sets.

Those reports are more than just neat: they're documenting data flows that are otherwise lost in the “shadow IT” which exists outside of formal systems in most organizations. Combined with lineage tracing, this is where IT departments and auditors should start to find Lyza really interesting.

A future version of Commons will also let non-Lyza users view Lyza reports over the Web – further extending Lyza beyond the analyst’s personal desktop to be an enterprise resource. Add in the 64-bit capability, an API to call Lyza from other systems, and some other tricks the company isn’t ready to discuss in public, and there’s potential here to be much more than a productivity tool for analysts.

This brings us back to pricing. If you were reading closely, you noticed that little comment about Lyza being priced under $1,000. Actually there are two versions: a $199 Lyza Lite that only loads from Microsoft Excel, Access and text files, and the $899 regular version that can also connect to standard relational databases and other ODBC sources and includes the API.

This isn’t quite as cheap as it sounds because these are one year subscriptions. But even so, it is an entry cost well below the several tens of thousands of dollars you’d pay to get started with full versions of QlikView or ADVIZOR, and even a little cheaper than Tableau. The strategy of using analysts’ desktops as a beachhead is obvious, but that doesn’t make it any less effective.

So, should my friends at QlikView be worried? Not right away – QlikView is a vastly more mature product with many features and capabilities that Lyza doesn’t match, and probably can’t unless it switches to an in-memory database. But analysts are QlikView’s beachhead too, and there’s probably not enough room on their desktops for both systems. With a much lower entry price and enough scalability, data manipulation and analysis features to meet analysts’ basic needs, Lyza could be the easier one to pick. And that would make QlikView's growth much harder.

----------------------------

*ADVIZOR Solutions and Tableau Software have excellent visualization with an in-memory database, although they’re not so scalable. PivotLink, Birst and LucidEra are on-demand systems that are highly scalable, although their visualization is less sophisticated. Here are links to my reviews: ADVIZOR, Tableau, PivotLink, Birst and LucidEra.

Sunday, 12 April 2009

PivotLink: Flexible On-Demand Business Intelligence

I did a Webinar recently (click here for slides) about on-demand business intelligence systems, sponsored by Birst. It boiled down to two key points:

- most of the work in business intelligence is in assembling the underlying database, even though the term “BI systems” often refers to the query and reporting tools (a.k.a. the “presentation layer”).

- vendor strategies to simplify BI include: using simpler interfaces, automation or pre-built solutions to make conventional technology easier; or using inherently more efficient alternative technologies such as in-memory and columnar databases.

Naturally, the full Webinar was jammed with much other wisdom (you missed Time-Traveling Elvis and the Reese’s Peanut Butter Cup Fallacy). But those two points alone provide a useful framework for considering business intelligence systems in general.

I bring this up because I’m finally writing about PivotLink, which I looked at more than a month ago. It turns out that my framework helps to put PivotLink into perspective.

Here’s the thing. PivotLink is an on-demand business intelligence product. Its most interesting technical feature is an in-memory columnar database. If you follow the industry, you know that type of database is about the geek-sexiest thing out there right now. I myself find it totally fascinating and had a grand time digging into the details.

But the rest of the world doesn’t care if it’s geek-ilicious:* they want to know how PivotLink can help them. Here’s where the framework comes in, since it clarifies what PivotLink does and doesn’t do. Or, to put that a bit more formally, it shows which elements of a complete business intelligence system PivotLink provides.

The answer to that being, PivotLink works largely at the presentation layer. It can import data from multiple sources and join tables on common keys. But it won’t do the complicated transformations and fuzzy matching needed for serious data integration. This means that PivotLink must either work with data that's already been processed into a conventional data warehouse, or with data that can be usefully analyzed in its original state.

There’s actually more of this naturally-analyzable data than you might think. Purchase transactions, a common source for PivotLink, are a good example. The obstacle to working with these has often been the size of the data sets, which meant lots of expensive hardware and lots of (how to put this delicately?) deliberately-paced IT support. These are exactly the barriers that on-demand systems overcome.

This brings us back to PivotLink's technology. In-memory, columnar databases are especially well suited for on-demand business intelligence because they compress data tightly (to 10% of the original volume, according to PivotLink), read only the columns required for a particular query (providing faster response) and don’t require special schemas or preaggregated data cubes (requiring less skill to set up and modify).

But even columnar systems vary in their details. PivotLink sits towards the flexible end of the spectrum, with support for incremental updates, many-to-many table relationships, and abilities to add new columns and merge data along different query paths without reloading it. The system also allows calculations during the data load and within queries, caches data in memory and further compresses it after an initial query, and supports user- and group-level security at the row, column or individual cell levels. Not all column-based systems can say the same.

On the other hand, PivotLink does not support standard SQL queries and doesn’t run on a massively parallel (“shared nothing”) architecture. Both limits are typical of older columnar databases, a reminder that PivotLink began life in 1998 as SeaTab Software. Although shared-nothing architectures are generally more scalable, PivotLink is already running data sets with more than 10 billion rows in its current configuration. Response is very fast: according to the company, one client with several billion rows of point-of-sale data runs a nightly update and then executes a summary report in under one minute. Still, PivotLink recognizes the benefits of shared-nothing systems and plans to move to that architecture by the end of 2009.

Lack of SQL compatibility means users must rely on PivotLink’s tools for queries and reports. These let administrators import data from CSV, TXT and Excel files and map them to PivotLink tables. (The actual storage is columnar, with different compression techniques applied to different data types. But to the user, the data looks like it’s organized in tables.) The system reads the source data and makes suggestions about field types and names, which the user can accept or revise.

Users then define queries against the tables. Each query contains a selected set of columns, which are identified either as a header (i.e., dimension) or metric. When queries involve multiple tables, the user also specifies the columns to join on. Each report is written against one query.

The distinction between tables and queries is important in PivotLink, because it provides much of the system’s flexibility. The same column can be a dimension in one query and a metric in another, and the same tables can be related on different keys in different queries. All this happens without reloading the data or running any aggregations. The metadata used to build reports is defined by the associated query, not the underlying tables.
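One way to picture the separation (this metadata layout is my invention, not PivotLink's actual schema): a query is just a set of role assignments over stored columns, so defining a new one touches no data.

```python
# One stored table; columns take different roles per query.
table = {"columns": ["store", "year", "revenue", "headcount"]}

queries = {
    "revenue_by_store": {"headers": ["store", "year"],      # dimensions
                         "metrics": ["revenue", "headcount"]},
    "stores_by_headcount": {"headers": ["headcount"],       # now a dimension
                            "metrics": ["revenue"]},
}

# Reports bind to a query's metadata, never to the table itself.
print(queries["stores_by_headcount"]["headers"])
```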

Reports are built by dragging headers and metrics into place on a grid, defining drill paths, and selecting tabular or chart formats. Reports can also rank and sort results and select the top or bottom rows for each rank. For example, a report could rank the top ten stores per region by revenue. Users can combine several reports into a dashboard.
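The store-ranking example is the classic "top N per group" pattern; in pandas on invented data (top two per region here, to keep the output short) it looks like this:

```python
import pandas as pd

df = pd.DataFrame({"region": ["East", "East", "East", "West", "West"],
                   "store": ["S1", "S2", "S3", "S4", "S5"],
                   "revenue": [500, 300, 100, 400, 250]})

# Rank stores by revenue within each region, keeping the top two.
top_per_region = (df.sort_values("revenue", ascending=False)
                    .groupby("region")
                    .head(2))
print(top_per_region)
```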

End-users can apply filters and drill into the selected elements within a report. However, PivotLink does not apply filters from one report to the rest of the dashboard, as QlikView or ADVIZOR do. This feature is also on the agenda for 2009.

PivotLink clients can import data, define queries and build reports without help from the company. PivotLink said it takes a couple of days to train an administrator to load the data and build queries, a day or two to train a user to build reports and dashboards, and minutes to hours to learn to use the reports. Based on what they showed me, that sounds about right. You can also find out for yourself by signing up for a free trial.

Pricing of PivotLink is based on the size of the deployment, taking into account data volume and types of users. It starts around $3,000 per month for 50 million rows. When I spoke with the company in early March, they had about 60 clients supporting over 6,000 users, and had doubled their volume in the past year.

-----------------------------------------------------------------------------------------
* geek-alicious? is it possible to misspell a word that doesn’t exist?