Pushing Data From Excel To Power BI Using Streaming Datasets

Very interesting article!

Chris Webb's BI Blog

One Power BI feature that almost passed me by (because it was released in August while I was on holiday) was the ability to create streaming datasets in the Power BI web app and push data to them via the Power BI REST API. This blog post has the announcement:
The documentation is here:
And Charles Sterling has an example of how to use it with Flow and PowerApps here:

However, when I played around with this I found there were a few things that were either confusing or not properly documented, so I thought it would be useful to give an example of how to use this functionality to automatically synch data from a table in Excel to Power BI using a Power Query query.

Creating the streaming dataset in Power BI

Imagine that you have a table called Sales in an Excel workbook on your…

View original post: 1,143 more words

Columnstore Indexes in SQL Server 2016

The columnstore index is the standard for storing and querying large data warehousing fact tables. It uses column-based data storage and query processing to achieve up to 10x query performance gains in your data warehouse over traditional row-oriented storage, and up to 10x data compression over the uncompressed data size. Beginning with SQL Server 2016, columnstore indexes enable operational analytics, the ability to run performant real-time analytics on a transactional workload.

These key terms and concepts are associated with columnstore indexes.


A columnstore is data that is logically organized as a table with rows and columns, and physically stored in a column-wise data format.


A rowstore is data that is logically organized as a table with rows and columns, and then physically stored in a row-wise data format. This has been the traditional way to store relational table data. In SQL Server, rowstore refers to a table where the underlying data storage format is a heap, a clustered index, or a memory-optimized table.

Column segment:

A column segment is a column of data from within a rowgroup (a rowgroup being a group of rows that is compressed into columnstore format together).

Delete bitmap:

Deletes are logical: deleted rows are only marked in the delete bitmap, and are physically removed when the index is rebuilt.

Delta rowgroups:

Newly inserted rows that have not yet been compressed into columnstore format.

Clustered Columnstore Index

A clustered columnstore index can even be created on a memory-optimized table, and it is handled like any other index. Through the compression delay feature it is possible to force compression after a certain amount of time instead of waiting for a rowgroup to reach its maximum number of rows. You can also filter the data and exclude certain values from the index (e.g. the deleted parts).

How to write it:

-- Create a clustered columnstore index on a disk-based table.
CREATE CLUSTERED COLUMNSTORE INDEX index_name
ON [ database_name. [ schema_name ] . | schema_name . ] table_name
[ WITH ( <with_option> [ ,...n ] ) ]
[ ON <on_option> ]
[ ; ]
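For instance, a sketch combining this syntax with the compression delay option mentioned above (the table and index names dbo.FactSales and cci_FactSales are hypothetical, not from the original):

```sql
-- Hypothetical example: a clustered columnstore index on a fact table,
-- delaying compression of closed delta rowgroups by 10 minutes.
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
ON dbo.FactSales
WITH (COMPRESSION_DELAY = 10 MINUTES);
```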

-- create a durable (data will be persisted) memory-optimized table
-- two of the columns are indexed
CREATE TABLE dbo.ShoppingCart (
    ShoppingCartId INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED,
    UserId INT NOT NULL INDEX ix_UserId NONCLUSTERED,
    CreatedDate DATETIME2 NOT NULL,
    TotalPrice MONEY
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
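Since SQL Server 2016 also allows a columnstore index on a memory-optimized table, a hedged sketch of adding one to the dbo.ShoppingCart table above (the index name is hypothetical):

```sql
-- Hypothetical sketch: add a clustered columnstore index to the
-- memory-optimized table, enabling real-time operational analytics.
ALTER TABLE dbo.ShoppingCart
ADD INDEX cci_ShoppingCart CLUSTERED COLUMNSTORE;
```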


Here are the principal differences between SQL Server 2014 and SQL Server 2016:

Columnstore Index Feature | SQL Server 2014 | SQL Server 2016
Batch execution for multi-threaded queries | yes | yes
Batch execution for single-threaded queries | | yes
Archival compression option | yes | yes
Snapshot isolation and read-committed snapshot isolation | | yes
Specify columnstore index when creating a table | | yes
AlwaysOn supports columnstore indexes | yes | yes
AlwaysOn readable secondary supports read-only nonclustered columnstore index | yes | yes
AlwaysOn readable secondary supports updateable columnstore indexes | | yes
Read-only nonclustered columnstore index on heap or btree | yes | yes*
Updateable nonclustered columnstore index on heap or btree | | yes
Additional btree indexes allowed on a heap or btree that has a nonclustered columnstore index | yes | yes
Updateable clustered columnstore index | yes | yes
Btree index on a clustered columnstore index | | yes
Columnstore index on a memory-optimized table | | yes
Nonclustered columnstore index definition supports using a filtered condition | | yes
Compression delay option for columnstore indexes in CREATE TABLE and ALTER TABLE | | yes
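As a sketch of the 2016-only filtered-condition feature above (all object names are hypothetical), a nonclustered columnstore index that keeps hot transactional rows out of analytics scans:

```sql
-- Hypothetical example (SQL Server 2016): a filtered nonclustered
-- columnstore index covering only completed orders.
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_Orders
ON dbo.Orders (OrderId, CustomerId, Quantity, Price)
WHERE OrderStatus = 5;
```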

Power Pivot, decimal separator error

When working with decimal number data in Power Pivot, you might see a generic error during the import of a new file, or the imported decimal numbers might appear without their comma or point separator.

These issues can occur when the data source in your workbook is a comma-separated values (CSV) file that was created on an operating system (OS) with an English locale and is then accessed on an OS with a non-English locale. The Microsoft Jet Database Engine is unable to directly handle certain values in the data.

It cannot handle certain values when the format in the Regional and Language settings of the OS is set to use a character other than a period (.) as a decimal separator. For example, your OS settings may be set to use a comma (,) for a decimal separator.

To resolve this issue, do one of the following tasks:

Replace the decimal separator:  Replace the decimal separator in the CSV file with another character, such as a semicolon (;). To do this, use the find-and-replace feature of the text editor you use to open the CSV file: find all characters used as a decimal separator, such as a period, and replace them with a semicolon. This works only if your CSV is very small and simple to edit.

Create a schema.ini file: You can create a standard schema.ini file that tells Jet how to read the CSV file regardless of the regional settings; this works with any kind of CSV file:

  • First create a normal .txt file and convert it to .ini (rename the extension)
  • Call the file schema.ini and save it in the same directory as the CSV file.
  • Include a line that defines the decimal separator actually used in the CSV. The first two entries in your schema.ini file will look like the following:
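As a sketch of those first two entries (the file name Sales.csv is a hypothetical placeholder for your own CSV):

```ini
[Sales.csv]
DecimalSymbol=.
```

The DecimalSymbol entry tells Jet to treat the period as the decimal separator regardless of the OS regional settings.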


The first row is the name of your CSV; the second one sets the character used as the decimal symbol in your CSV. You can also set each column's name, type, and length using strings like these:

Col1=CustomerNumber Text Width 10
Col2=CustomerName Text Width 30

ColN is the position of the column (from left to right) in your CSV.
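Putting the pieces together, a complete schema.ini for a hypothetical Customers.csv could look like this (Format and ColNameHeader are standard schema.ini keys; their values here are assumptions about the file's layout):

```ini
[Customers.csv]
Format=CSVDelimited
ColNameHeader=True
DecimalSymbol=.
Col1=CustomerNumber Text Width 10
Col2=CustomerName Text Width 30
```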

For more information about the schema.ini file, go to the MSDN website: http://msdn.microsoft.com/en-us/library/ms709353%28VS.85%29.aspx.




Power BI Diagnostics, Trace Logs And Query Execution Times (Again)

Chris Webb's BI Blog

I’ve blogged a few times now about how to monitor Power BI query execution times, most recently here, and while using functions like DateTime.FixedLocalNow() is all very well I’ve never been happy with the fact that you have to alter your queries in order to be able to time them. Using the Power BI trace logs always seemed a much better option – and since the log files are in a reasonably easy to understand text format, you can use a Power BI query to load data from them. I wrote a post on importing data from the trace logs (using Power Query) here back in 2014 and Rui Romano has a great post on doing the same thing for Power BI here. The big problem with the Power BI trace logs, though, is that there is too much information in them: they’re really meant for Microsoft internal…

View original post: 980 more words

DAX Year to Date in Previous/Prior Year

How Are We Doing THIS Year Versus the Same Time LAST Year?

This is a pretty common question, and a pretty common need.  But there’s no DAX function that just DOES this.

For instance, getting a “Year to Date” calculation for, say, Total Sales, is pretty straightforward:

[YTD Sales] = CALCULATE([Total Sales], DATESYTD(Calendar[Date]))

Back to the point of this post:  it turns out that you can “nest” a DATESYTD inside of a DATEADD!

Try this:

[Prev Yr YTD Sales] = CALCULATE([Total Sales], DATEADD(DATESYTD(Calendar[Date]),-1,Year))

We have two requirements:

  • You must have a well-constructed Calendar/Dates table
  • You need to then USE that Calendar/Dates table on your pivot table
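As a hedged alternative (same assumptions: a well-constructed Calendar table used on the pivot), the same result can be obtained by nesting DATESYTD inside SAMEPERIODLASTYEAR, which is equivalent to DATEADD with -1 YEAR (the measure name here is hypothetical):

```dax
[Prev Yr YTD Sales Alt] = CALCULATE([Total Sales], SAMEPERIODLASTYEAR(DATESYTD(Calendar[Date])))
```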

If you want more info, click this link.