Parameterization in Centerprise Data Integrator

Parameters play an important role in the reusability and configurability of dataflows. An extensive parameterization capability ensures that dataflows and workflows can be invoked in multiple situations, saving time and enhancing return on investment.

A common scenario is reusing an existing dataflow for a file that has the same structure but comes from a different source. This is a perfect opportunity to use parameters.

In this example, we will change the source file to a different file and change the parameters to specify an effective date for our data quality rules.

We begin by dragging and dropping the parameter onto the dataflow, then opening the parameter properties dialog box.


We specify a new parameter and call it “effective date,” choose the data type, and give it a default value of December 31.


Once the specifications are set, the parameter is available for mapping.


In this example, the data quality rule operates on property tax, checking whether the property tax is zero.


Now we want to add an effective date. We want to apply this parameter to our data quality rule so that the rule does not take effect until the effective date is reached, and we want to specify that date from outside the dataflow. We map the effective date into the data quality rule, then go to the data quality rules dialog box and specify: if the effective date is greater than today, always return true; otherwise, check the rule.


That means the rule is checked only once it becomes effective. You can now specify any effective date from outside and control the dataflow’s behavior, so this data quality rule is dependent on a specific date.
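The gating logic described above can be sketched in Python. This is a minimal illustration, not Centerprise internals; the field name `PropertyTax` and the rule body are assumptions based on the example in this post:

```python
from datetime import date

def property_tax_rule(record, effective_date):
    """Data quality check: property tax must be non-zero.

    The check is gated on an externally supplied effective date:
    while the effective date is still in the future, the rule always
    passes; once it is reached, the actual check applies.
    """
    if effective_date > date.today():
        return True  # rule not yet effective: always pass
    return record["PropertyTax"] != 0

# The effective date arrives as a dataflow parameter, with a default
# (December 31 in this example) that can be overridden at job time.
passed = property_tax_rule({"PropertyTax": 0}, date(2015, 12, 31))
```

Because the date comes in as a parameter, the same rule can be switched on at different times for different jobs without editing the dataflow.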

We can then open the job scheduler, schedule a new job, and point it to the newly created dataflow with parameters. When we go to the job parameters tab, we can see all the implicit and explicit parameters.


If we select our user-defined parameter, we can see the specified default value of December 31.


Say we decide we don’t want this rule to be effective until March 31. We can select that date from the calendar on the right side.


This tells the application not to use the data quality rule before March 31. That is how the behavior of the dataflow can be controlled from outside the dataflow.

The software has scanned the dataflow and implicitly figured out that the source has two file paths: loans and tax.


We can point to a different file by changing the file path.


The same thing can be done on the destination side, enabling you to use the same flow for a totally different set of data.
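Conceptually, the job-level override works like a key-value merge: the job's parameter values take precedence, and anything left unset falls back to the dataflow's defaults. A minimal sketch, with hypothetical parameter names and paths:

```python
# Dataflow defaults for the two implicitly discovered file paths.
# Names and paths are illustrative, not actual Centerprise identifiers.
defaults = {
    "LoansFilePath": r"C:\data\loans.csv",
    "TaxFilePath": r"C:\data\tax.csv",
}

def resolve_parameters(defaults, overrides):
    """Job parameters override dataflow defaults; unset ones keep the default."""
    resolved = dict(defaults)
    resolved.update(overrides)
    return resolved

# Point the same dataflow at a different source file at run time:
params = resolve_parameters(defaults, {"LoansFilePath": r"C:\data\loans_q2.csv"})
```

The dataflow itself never changes; only the values supplied at schedule time do, which is what makes the flow reusable across data sets.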

You can see parameterization and other useful getting started videos on Astera TV at

ReportMiner Has Been Named a 2015 Trend-Setting Product by KMWorld Magazine

We are excited to share with our blog readers that our industry-leading ReportMiner data extraction software has been named a 2015 Trend-Setting Product by KMWorld Magazine!

KMWorld Editor-in-Chief Hugh McKellar commented that, “In each and every case, the thoughtfulness and elegance of the software certainly warrants deep examination. Depending on customer needs, the products on the list can dramatically boost organizational performance. The products identified fulfill the ultimate goal of knowledge management—delivering the right information to the right people at the right time.”

The panel, which consists of editorial colleagues, market and technology analysts, KM theoreticians, practitioners, customers and a select few savvy users (in a variety of disciplines), reviewed more than 200 vendors, whose combined product lineups include more than 1,000 separate offerings.

ReportMiner’s user-friendly interface enables business users with little or no technical background to easily accomplish a wide range of data extraction tasks without employing expensive IT resources. Smart features such as automated name and address parsing and auto-creation of data extraction patterns automate many time-consuming manual tasks, saving time and increasing data quality. You can find out more at

We Have a Winner!

We just finished our second campaign to post customer reviews on a software reviews portal and we are excited to announce that Phil Nacamuli of Leximantix has won the iPad drawing. We had a great response to this campaign and want to thank all of our customers who took the time to post a review of Centerprise or ReportMiner.

You can access all of our reviews on our customer testimonial page. Here are some that stood out for us in this latest campaign:

Prasad Sunkara of Vish Group

“Centerprise at its best!”

Centerprise at its best! Easy for business users. We were able to train new employees and get them to speed within a week. Using Centerprise as an ETL tool is working out great. Especially working with uncommon data sources! The tool is easy to use, very affordable to own and returns are very high, including increased speed of delivery of data, business user-driven data delivery and rapid prototyping.

Dawn Bauer of Farmers Mutual Hail Insurance

“Fast and Simple. All-in-one package”

Centerprise is perfect for dumping data from an ODS system into a data warehouse for reporting. It is fast, easy and simple to use. You can have a dataflow up and running in mere minutes compared to some other tools on the market.

Don Smith of Software Solutions

“WOW Why did we wait soooo long?”

Centerprise allows us to create a reusable template to standardize the information that we use to validate conversions. Standardizing the reporting data from two systems is an awesome strength that can be developed out of Centerprise. We have seen that the use of this package saved us 2 hours per conversion but now since the templates are already developed it is saving us more. This reduces our lead time for conversions, and increases our efficiencies as a conversion team.

Mario Ferrer of Achievers

“Astera Centerprise rules!”

I love Centerprise and I believe it has a lot of potential. It’s incredibly easy to use. People with no previous ETL experience can start building simple mappings very quickly. It’s incredibly easy to install and maintain. After I first installed Centerprise, I was able to start working in just a few minutes. I am particularly impressed with how much Centerprise simplifies transformations that require more work in other ETL tools. The perfect example is the SCD transformation, which handles all the logic in a Slowly Changing Dimension. Even with a market leader like Informatica, the SCD logic has to be manually built. With Centerprise this can be built in just a few minutes, and it impacts every mapping.

Centerprise Best Practices: Working With the High Volume Data Warehouse

Data warehouses and data marts provide the business intelligence needed for timely and accurate business decisions. But data warehousing comes with a unique set of challenges revolving around huge volumes of data and maintenance of data keys.
Centerprise is the ideal solution for transferring and transforming large amounts of records from a transactional database to a data warehouse. It provides all the functionality needed for today’s demanding data storage and analysis requirements, including sophisticated ETL features that ensure data quality, superior performance, usability, scalability, change data capture for fast throughput, and wide connectivity to popular databases and file formats.

A new whitepaper from Astera provides best practices to be kept in mind during the entirety of the development process in order to make certain data warehousing projects will be successful. Topics include data quality, data profiling, validation, logging, translating into star schema, options for related tables, and performance considerations. Download your free copy here!

Rule-Based Filtering for Export in ReportMiner

Often when exporting data from an extraction process, only certain information is needed. It can be a time-consuming and complex process to export all the extracted data and then delete the unwanted data from the destination.

ReportMiner solves this problem quickly and easily with its rule-based filtering for export feature. All you need to do is create your export setting, type your rule-based filter into the expression window as shown in the figure below, and verify the rule by clicking the Compile button. In this case, the user only wanted to export data for sofas, so the expression is ITEM = SOFA.

[Screenshot: export settings with the rule-based filter expression ITEM = SOFA]

ReportMiner will export only the records that meet the criteria of your expression. In this case, two records that pertain to sofas were exported.
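The effect of the filter expression can be sketched in Python; the sample records and the second field are hypothetical, but the predicate mirrors the expression ITEM = SOFA from the example:

```python
# Rule-based filtering at export time: only records matching the
# filter expression are written to the destination.
records = [
    {"ITEM": "SOFA", "QTY": 2},
    {"ITEM": "TABLE", "QTY": 1},
    {"ITEM": "SOFA", "QTY": 5},
]

def passes_filter(record):
    # Equivalent of the filter expression ITEM = SOFA
    return record["ITEM"] == "SOFA"

exported = [r for r in records if passes_filter(r)]  # the two sofa records
```

Filtering before export avoids the export-then-delete round trip described above.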

[Screenshot: the two exported records matching the sofa filter]

To learn more about this feature, view our Rule-Based Filtering From Export Settings video, part of our ReportMiner Tutorial Series at

Saving Time and Ensuring Data Quality with ReportMiner Automatic Name and Address Parsing

Often a data source stores all address information in a single field. The individual components need to be parsed out into separate fields so the data can be loaded into a database and/or combined with information from other sources. With thousands of records to parse, doing this manually is a time-consuming and error-prone task, putting your data quality and reliability at risk.

Astera’s ReportMiner data extraction software automatically parses name and address data with a few simple clicks, ensuring your data quality and saving you resource time and money.

ReportMiner breaks up name and address data into separate components: for names, prefix, first, middle, last, and suffix; for addresses, street, suite, city, state, zip, and country.

Once your Data Region has been created, you simply highlight the name area, right-click, and select “Add Name Field.” You do the same for addresses: highlight the address area, right-click, and select “Add Address Field (US).” ReportMiner will automatically create your name and address fields by breaking them up into individual fields.
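To give a feel for what the parsing step produces, here is a rough Python sketch of splitting a one-field US address into components. A real parser such as ReportMiner's handles prefixes, suite numbers, and irregular formats; this toy version, with a made-up sample address, handles only a simple comma-separated shape:

```python
def parse_us_address(address):
    """Split 'street, city, STATE zip' into separate fields."""
    street, city, state_zip = [part.strip() for part in address.split(",")]
    state, zip_code = state_zip.split()
    return {"street": street, "city": city, "state": state, "zip": zip_code}

parsed = parse_us_address("123 Main St, Springfield, IL 62704")
```

Each component lands in its own field, ready to be loaded into separate database columns.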


For more information on creating data regions and fields in ReportMiner, check out our blog Smart Data Extraction with ReportMiner: Automating Creation of Extraction Models.

Extract Valuable Data from PDFs With ReportMiner

PDF (portable document format) files were developed in the early 1990s to enable computer users with different platforms and software tools to share documents with a fixed layout of text and graphics. Because they are independent of application software, hardware, and operating systems, PDFs have become a popular way to share documents. All that is needed is a PDF reader, available for free download on the Internet.

In this day and age, however, data lives on, even if it’s trapped inside a PDF. Businesses need to combine PDF data with other data, use it in spreadsheets or databases, integrate it with other applications, and apply it to business intelligence.

Astera’s ReportMiner data extraction software offers many capabilities for PDF data extraction in an easy-to-use interface that doesn’t require code writing. The tool enables users to easily extract data by simply creating an extraction layout and exporting to the destination of their choice. ReportMiner does all the heavy lifting by automatically recognizing data patterns and creating necessary data regions and fields.

In addition, users are able to use their extracted data to take advantage of the product’s advanced transformation, quality, and scrubbing features.

To extract information from a PDF file in ReportMiner, simply load a PDF and create a report model by selecting what needs to be extracted and specifying a pattern within the report.

ReportMiner also has a preview feature so that users can make sure everything is being extracted as intended. Once the layout is complete, users have the option to export to Excel, CSV, or a chosen database. The report model can also be opened in a dataflow to apply transformations to the data.

For more information on specifying regions and fields and exporting data, check out these blogs:

Smart Data Extraction with ReportMiner: Automating Creation of Extraction Models

Exporting Data in ReportMiner

Human-Readable Reports and the Data Trapped Within

Often reports are produced with the intention that they will be printed and read by human eyes. In today’s data-driven world, however, some or all of that physical data needs to be transformed into electronic data that can be integrated into enterprise applications for operational and business intelligence use.

IT is usually tasked with extracting the important data trapped within human-readable reports. This complex process involves coding and writing scripts that identify data patterns in the underlying reports. Since the requirements for what data needs to be extracted and how it will be used typically come from the business side of the enterprise, the process also involves multiple back-and-forth rounds between the business department and IT.

There must be an easier way. How about a software-based solution that automatically extracts the desired data and can be used by the business department with little or no IT involvement? What would such a solution look like?

Our new whitepaper describes the anatomy of a solution that eliminates the complexity of traditional data extraction methods. You can download it free here.


Exporting Data in ReportMiner

Once you’ve built your extraction model in ReportMiner to extract desired data from unstructured sources such as printed or spool documents, you need to send it somewhere so that it becomes meaningful and useful to your business. With ReportMiner, you can map and export data to almost any destination you want, including databases like SQL Server, Access, MySQL, PostgreSQL, and any ODBC-compatible database, as well as formats such as fixed length, delimited, Excel, and XML.

In this blog we’ll show you how to quickly export your data to an Excel file, where you’ll be able to analyze it, add it to your database, and if it is important over the long term, to your data warehouse.

After you’ve prepared your extraction model, you need to make sure everything is set up correctly for exporting. You do this by selecting the Preview icon and checking in the preview window to make sure everything is the way you’d like it to look.

When you are sure your setup is correct, choose the Excel icon: Create New Export Setting and Run (to Excel).

The pop-up window will enable you to specify where you want to save the file.

Once you’ve saved the file, there are options available, including First Row Contains Header, specifying the worksheet if you have multiple worksheets in your Excel file, or appending to a file that already exists.

After you select your options, a new pop-up will allow you to change the layout of your file. For example, you might want to make changes to the column headers such as spacing, change the order of the fields, or change formats such as the date format.
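The layout step amounts to reordering and renaming columns and reformatting values before they are written. As a hedged illustration, here is how the same kind of export could be scripted outside the GUI, writing to a delimited file (one of the destination formats mentioned earlier); the records and field names are invented for the example:

```python
import csv
import io
from datetime import date

# Hypothetical extracted records.
records = [
    {"Name": "Acme Corp", "Amount": 1200, "Date": date(2015, 3, 31)},
    {"Name": "Widget Co", "Amount": 800, "Date": date(2015, 6, 30)},
]

# Layout tweaks analogous to the export dialog: reorder the fields,
# rename the headers, and format the date column.
layout = [("Date", "Effective Date"), ("Name", "Customer Name"), ("Amount", "Amount")]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow([header for _, header in layout])  # First Row Contains Header
for rec in records:
    writer.writerow([
        rec[field].strftime("%m/%d/%Y") if field == "Date" else rec[field]
        for field, _ in layout
    ])
```

In ReportMiner the same choices are made with a few clicks instead of code.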

After exporting, you can check to make sure all your records were exported in the progress window on the left side. On the right side, there is a link to open the file in Excel.

There you can see all of your exported data in spreadsheet format and check to make sure all your changes to headers, field position, etc. are reflected in the exported data.

That’s it! You’re done!