Saturday, December 31, 2016

Power BI Content Packs

Power BI content packs let you package your dashboards, reports, and datasets and share them with your entire organisation or with specific groups of users within the organisation. Users can also customise the dashboards within a content pack.

Power BI content packs are available only to users with Power BI Pro. Publishing an organizational content pack adds it to the content pack gallery, a centralized repository that makes it easy for members to browse and discover the dashboards, reports, and datasets published for them.

Create and publish a content pack -

Retail Analysis Sample Content pack -


Tuesday, December 27, 2016

A function 'CALCULATE' has been used in a True/False expression that is used as a table filter expression. This is not allowed.

One restriction of using a True/False (Boolean) filter argument in CALCULATETABLE to restrict rows is that the condition cannot reference a measure, because a measure reference implies a hidden CALCULATE. To filter rows by a measure, use an explicit FILTER expression instead.
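For example, the first expression below raises this error, while the FILTER version works because the measure is evaluated row by row inside the explicit iterator (the table and measure names are illustrative):

```dax
-- Raises the error: a measure inside a Boolean filter argument
EVALUATE CALCULATETABLE ( Sales, [Total Sales] > 1000 )

-- Works: wrap the measure condition in an explicit FILTER
EVALUATE CALCULATETABLE ( Sales, FILTER ( Sales, [Total Sales] > 1000 ) )
```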


Sunday, December 18, 2016

Azure Analysis Services

Azure Analysis Services is an enterprise-grade OLAP engine and BI modeling platform, offered as a fully managed platform-as-a-service (PaaS). It lets you move your semantic models to the cloud and scale to handle spikes in demand. It is built on SQL Server Analysis Services, is compatible with SQL Server 2016 Analysis Services Enterprise Edition, and supports tabular models at the 1200 compatibility level.
Developers can also use the existing tools, SQL Server Data Tools for Visual Studio and SQL Server Management Studio, to manage Azure Analysis Services models.
Developers can create a server in seconds and use Azure Active Directory to manage user identity and role-based security.
The on-premises data gateway acts as a bridge, providing secure data transfer between on-premises data sources and your Azure Analysis Services server in the cloud.
Azure Analysis Services – Architecture

Power BI Embedded

Power BI Embedded is an Azure service for adding built-in analytics to web and mobile applications. It lets users author interactive reports in Power BI Desktop without writing a single line of code. No Power BI logins or Office 365 AD accounts are required.
With Power BI Embedded there is no need to register the application in Azure Active Directory and no need for users to log in to access the reports. If required, users can be authenticated through Forms or application-specific authentication.
Like any other service in Azure, resources for Power BI Embedded are provisioned through the Azure Resource Manager APIs:
a) First, provision a Power BI Workspace Collection. It can be created either manually in the Azure Portal or programmatically using the Azure Resource Manager APIs.
b) Download and Unzip the sample from GitHub.
c) Provision a workspace in the existing Workspace collection created in step a by following the instructions on
d) Finally import your PBIX file and then you can run your sample application to see your PBIX embedded in the application.
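Step (a) above can be sketched programmatically. The snippet below only builds the Azure Resource Manager resource URL for a workspace collection; the subscription, resource group, and collection names are placeholders, and the `api-version` value is an assumption based on the Power BI Embedded API of the time. In a real call you would issue an authenticated PUT (e.g. with a bearer token from Azure AD) against this URL.

```python
# Sketch: building the ARM resource URL used to provision a Power BI
# Embedded workspace collection. All names below are illustrative.

ARM_BASE = "https://management.azure.com"
API_VERSION = "2016-01-29"  # assumed api-version for Power BI Embedded

def workspace_collection_url(subscription_id, resource_group, collection_name):
    """Return the ARM URL for a Power BI workspace collection resource."""
    return (
        f"{ARM_BASE}/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.PowerBI/workspaceCollections/{collection_name}"
        f"?api-version={API_VERSION}"
    )

url = workspace_collection_url("my-sub-id", "my-rg", "my-collection")
print(url)
```

The same URL pattern, with different trailing segments, is used to list access keys and to create workspaces inside the collection.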

Access Denied error when trying to import Power BI Desktop report using Power BI Embedded – ProvisionSample

When you import a Power BI Desktop report using the ProvisionSample console application, you are prompted for a file path. Make sure to supply the complete path including the .pbix file name, e.g. c:\temp\Test.pbix, and also make sure the .pbix file is closed before doing the import.

Saturday, December 10, 2016

Apache Hadoop – Ecosystem

Apache Hadoop is an excellent framework for processing, storing, and analyzing large volumes of unstructured data, aka Big Data. But getting a handle on all of the project's myriad components and sub-components, with names like Pig and Mahout, can be difficult.


Hadoop Distributed File System: HDFS, the storage layer of Hadoop, is a distributed, scalable, Java-based file system adept at storing large volumes of unstructured data.

Batch Processing

MapReduce: MapReduce is a software framework that serves as the compute layer of Hadoop. MapReduce jobs are divided into two (obviously named) parts. The “Map” function divides a query into multiple parts and processes data at the node level. The “Reduce” function aggregates the results of the “Map” function to determine the “answer” to the query.
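The two phases can be illustrated with the classic word-count example. This is a single-process sketch, not the distributed framework: in real Hadoop the map tasks run in parallel across nodes and the framework performs the shuffle between phases.

```python
from collections import defaultdict

def map_phase(document):
    """'Map' step: emit (word, 1) pairs; in Hadoop this runs per node."""
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Group intermediate pairs by key (done by the framework itself)."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """'Reduce' step: aggregate the mapped values into the final answer."""
    return {key: sum(values) for key, values in grouped.items()}

counts = reduce_phase(shuffle(map_phase("the quick fox the fox")))
print(counts)  # {'the': 2, 'quick': 1, 'fox': 2}
```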

Data Access

Hive: Hive is a Hadoop-based data warehousing-like framework originally developed by Facebook. It allows users to write queries in a SQL-like language called HiveQL, which are then converted to MapReduce jobs. This allows SQL programmers with no MapReduce experience to use the warehouse and makes it easier to integrate with business intelligence and visualization tools such as MicroStrategy, Tableau, Revolution Analytics, etc.
Pig: Pig is a Hadoop-based platform developed by Yahoo; its language, Pig Latin, is relatively easy to learn and is adept at very deep, very long data pipelines (a limitation of SQL).
HCatalog: HCatalog is a centralized metadata management and sharing service for Apache Hadoop. It allows for a unified view of all data in Hadoop clusters and allows diverse tools, including Pig and Hive, to process any data elements without needing to know physically where in the cluster the data is stored.
Tez: An extensible framework for building high-performance batch and interactive data-processing applications, coordinated by YARN in Apache Hadoop. Tez improves on the MapReduce paradigm by dramatically improving its speed, while maintaining MapReduce's ability to scale to petabytes of data.


HBase: HBase is a non-relational database that allows for low-latency, quick lookups in Hadoop. It adds transactional capabilities to Hadoop, allowing users to conduct updates, inserts and deletes. EBay and Facebook use HBase heavily.

Data Transfer

Sqoop: Sqoop is a connectivity tool for moving data from non-Hadoop data stores – such as relational databases and data warehouses – into Hadoop. It allows users to specify the target location inside of Hadoop and instruct Sqoop to move data from Oracle, Teradata or other relational databases to the target.


Storm: Apache Storm is a free and open-source distributed real-time computation system. Storm makes it easy to reliably process unbounded streams of data, doing for real-time processing what Hadoop did for batch processing. Storm is simple, can be used with any programming language, and is a lot of fun to use.
Flume: Flume is a framework for populating Hadoop with data. Agents are deployed throughout one's IT infrastructure – inside web servers, application servers, and mobile devices, for example – to collect data and integrate it into Hadoop.


Oozie: Oozie is a workflow processing system that lets users define a series of jobs written in multiple languages – such as MapReduce, Pig, and Hive – and then intelligently link them to one another. Oozie allows users to specify, for example, that a particular query is only to be initiated after specified previous jobs on which it relies for data are completed.
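The dependency idea behind such a workflow can be sketched as a tiny dependency graph: a job is eligible to run only after every job it depends on has completed. The job names below are hypothetical, and this is a simplified single-process illustration of the ordering, not how Oozie itself is implemented.

```python
# Toy illustration of Oozie-style job ordering. Each job maps to the list
# of jobs it depends on; job names are made up for the example.
workflow = {
    "import_logs": [],
    "clean_logs": ["import_logs"],
    "daily_report": ["clean_logs"],
}

def run_order(workflow):
    """Return an execution order where every job follows its dependencies."""
    order, done = [], set()

    def visit(job):
        for dep in workflow[job]:   # run prerequisites first
            if dep not in done:
                visit(dep)
        if job not in done:
            done.add(job)
            order.append(job)

    for job in workflow:
        visit(job)
    return order

print(run_order(workflow))  # ['import_logs', 'clean_logs', 'daily_report']
```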


Avro: Avro is a data serialization system that allows for encoding the schema of Hadoop files. It is adept at parsing data and performing remote procedure calls.

Machine Learning

Mahout: Mahout is a data mining library. It takes the most popular data mining algorithms for performing clustering, regression testing, and statistical modeling and implements them using the MapReduce model.
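To see how such algorithms fit the MapReduce model, one k-means iteration can be phrased as a map step (assign each point to its nearest centroid) and a reduce step (recompute each centroid as the mean of its assigned points). This is a simplified 1-D, single-process sketch with made-up data, not Mahout's actual implementation.

```python
# One k-means iteration in map/reduce terms, over 1-D points.

def nearest(point, centroids):
    """'Map' step per point: index of the closest centroid."""
    return min(range(len(centroids)), key=lambda i: abs(point - centroids[i]))

def kmeans_step(points, centroids):
    """One map + reduce round: assign points, then average per centroid.
    (Centroids that attract no points simply drop out in this sketch.)"""
    assigned = {}
    for p in points:                       # "map": emit (centroid_id, point)
        assigned.setdefault(nearest(p, centroids), []).append(p)
    return [sum(ps) / len(ps)              # "reduce": mean of each group
            for _, ps in sorted(assigned.items())]

print(kmeans_step([1.0, 2.0, 9.0, 11.0], [0.0, 10.0]))  # [1.5, 10.0]
```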

Management Ops

Ambari: Ambari is a web-based set of tools for deploying, administering, and monitoring Apache Hadoop clusters. Its development is led by engineers from Hortonworks, which includes Ambari in its Hortonworks Data Platform.
BigTop: BigTop is an effort to create a more formal process, or framework, for packaging and interoperability testing of Hadoop's sub-projects and related components, with the goal of improving the Hadoop platform as a whole.
