Python: Clean SQL views for Documentation

One of the most tedious tasks when working with databases is writing and maintaining documentation, in particular producing column reports from tables and views.

So why not try to make this task a bit lighter by using Python?

Step 0 – The input and the output

The script expects the view definition as text, i.e. the CREATE VIEW script of the view.

(Screenshot: right-click on the view)

(Screenshot: the copied view script)

(Screenshot: the output after running the Python script)


Step 1 – Extract Columns Function
def extract_columns(in_txt):
    """Simple function to extract the columns between SELECT and FROM"""
    # Splitting on SELECT, we keep only the text after it
    out_view = re.split(r'(?i)\bselect\b', in_txt)[1]
    # Splitting on FROM, we keep only the text before it
    out_view = re.split(r'(?i)\bfrom\b', out_view)[0]
    # Extra spaces and tabs are cleaned later, line by line
    return out_view

Let’s start by defining a function whose purpose is to extract only the columns from the script of a view, ignoring everything before the SELECT keyword and everything after the FROM keyword.
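To see what the function returns, here is a quick, self-contained check on a small hypothetical view script (the table and column names are made up for illustration):

```python
import re

def extract_columns(in_txt):
    """Extract the column list between SELECT and FROM."""
    out_view = re.split(r'(?i)\bselect\b', in_txt)[1]
    return re.split(r'(?i)\bfrom\b', out_view)[0]

# Hypothetical view script, just for illustration
sample = """CREATE VIEW dbo.v_customers AS
SELECT
    [CustomerId],
    [CustomerName] AS [Name]  -- display name
FROM dbo.Customers"""

columns = extract_columns(sample)
print(columns)
```

Everything before SELECT and everything after FROM is discarded, so only the raw column block is left for the cleaning steps that follow.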

Step 2 – Let’s start from the variables
import re

# We open the file containing the view and extract its columns
with open('input.txt', 'r', encoding='utf-8') as in_txt:
    view = extract_columns(in_txt.read())
# Now we split the text into a list of lines
view_l = view.split('\n')
out_l = []

Now we declare the variables we are going to use.

  • view: contains the extracted column text
  • view_l: a list created by splitting the text line by line
  • out_l: the output list

Step 3 – Main Code
# Now we loop through all the lines of the view
for num, line in enumerate(view_l):
    # Clean extra spaces
    line = line.strip()
    # Remove comments
    line = line.split('--')[0]
    # Remove the first comma
    line = line.replace(',', '', 1)
    # Substitute tabs with spaces
    line = line.replace('\t', ' ')
    # While two spaces are in the line, we substitute them with one space
    while '  ' in line:
        line = line.replace('  ', ' ')
    # Remove [ and ]
    line = line.replace('[', '').replace(']', '')
    # If line is not empty
    if len(line) > 0:
        # If line is not a comment
        if line[0] != '-':
            # We add the cleaned line to out_l
            out_l.append(line)

Finally we come to the main code.
The first part is a for loop over the list we created in Step 2.
For each line, the script removes unneeded characters and comments, then adds the cleaned line to the new list.
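To make the effect of each cleaning step visible, here is the same sequence applied to a single hypothetical line from a view script:

```python
# Hypothetical raw line from a view script, just for illustration
line = "\t ,[CustomerName]  -- the customer's display name"

line = line.strip()              # strip leading/trailing whitespace
line = line.split('--')[0]       # drop the inline comment
line = line.replace(',', '', 1)  # drop the first comma
line = line.replace('\t', ' ')   # tabs to spaces
while '  ' in line:              # collapse repeated spaces
    line = line.replace('  ', ' ')
line = line.replace('[', '').replace(']', '')  # remove brackets
print(repr(line))
```

Note that a trailing space can survive the cleaning, since the strip happens at the start of the sequence.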

Step 4 – Output Text
# Open the output file
with open('out.txt', 'w', encoding='utf-8') as out_txt:
    # For each line in the output list
    for line in out_l:
        if len(line) > 1:
            values = line.split(' ')
            # Try/except to output the values to the file separated by ','
            try:
                # Column name and its alias, separated by ','
                out_txt.write(values[0] + ',' + values[2] + '\n')
            except IndexError:
                # No alias: write only the column name
                out_txt.write(values[0] + '\n')

In the last part we write to an external file, using a try/except to avoid IndexErrors.
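Putting the four steps together, this is a minimal end-to-end sketch of the whole script, using an in-memory sample view instead of input.txt/out.txt (the view text is hypothetical):

```python
import re

def extract_columns(in_txt):
    """Extract the column list between SELECT and FROM."""
    out_view = re.split(r'(?i)\bselect\b', in_txt)[1]
    return re.split(r'(?i)\bfrom\b', out_view)[0]

def clean_view(in_txt):
    """Return the cleaned column lines of a CREATE VIEW script."""
    out_l = []
    for line in extract_columns(in_txt).split('\n'):
        line = line.strip().split('--')[0]           # spaces and comments
        line = line.replace(',', '', 1).replace('\t', ' ')
        while '  ' in line:                          # collapse spaces
            line = line.replace('  ', ' ')
        line = line.replace('[', '').replace(']', '')
        if line and line[0] != '-':
            out_l.append(line.strip())
    return out_l

# Hypothetical view script, just for illustration
sample = """CREATE VIEW dbo.v_orders AS
SELECT
    [OrderId],
    [OrderDate] AS [Date]  -- order creation date
FROM dbo.Orders"""

for row in clean_view(sample):
    print(row)
```

From here, writing out.txt is just a matter of dumping the returned list, one line per column.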


Power BI Service February News

  • Power BI admin role: users with the Power BI admin role have access to the Power BI admin portal, including tenant-wide usage metrics, and can control tenant-wide usage of Power BI features.
  • Power BI audit logs globally available.
  • Public preview: email subscriptions. With Power BI e-mail subscriptions, you can quickly subscribe to emails of the report pages that matter most. Once subscribed, Power BI will regularly send screenshots of that report page directly to your inbox whenever the data changes. The image in your inbox will show up exactly as it does in Power BI, and include a link to the report where you can drill into any interesting findings.
  • New APIs available for custom visuals developers: Microsoft released version 1.4 of the developer tools and custom visual APIs.
  • Real-time streaming generally available: Microsoft announced the general availability of the real-time streaming feature set, which allows users to easily stream data to Power BI, and that Azure Stream Analytics can now output to Power BI streaming datasets.


  • Push rows of data to Power BI using Flow: simply create a Flow with the "push rows to streaming dataset" action and Flow will automatically push data to that endpoint, in the schema that you specify, whenever the Flow is triggered.
  • New Microsoft Azure AD content pack: quickly and easily understand how your employees and partners are using Azure AD, and use that information to plan your IT infrastructure and maximize business value.


Power BI: how to unslice your data

This is a very interesting use of the Power BI slicer. The original article, written by Art Tennick, is here. Here is the complete article:

A normal slicer can be tedious when you want to show everything apart from just one or two entries in your filtered tiles – don’t take your finger off the Ctrl key! You could always turn on Select All and then unselect the items, but you may not want Select All enabled, and it’s not available for chiclets. Or you could use visual-, page- or report-level filters, but these are not available in dashboards or publish-to-web. So you may be interested in an anti-slicer. There are many ways to do this; this is one approach.

If you want to reproduce my example, you need to import DimGeography, DimCustomer, and FactInternetSales from Adventure Works DW. Check that all three tables are related. Then make a copy of DimGeography in Power Query and rename it to Country. Make sure it has no relationships to any other table. The normal chiclet slicer is based on DimGeography; the two chiclet anti-slicers are based on Country. All three use the EnglishCountryRegionName column. Finally, add the DAX measures shown below (the last three are optional) and build the three tiles (as per the screenshots, and use EnglishCountryRegionName from the DimGeography table, not the Country table):

Sales = SUM(FactInternetSales[SalesAmount])

Anti-Sales (single) = CALCULATE([Sales], FILTER(DimGeography, DimGeography[EnglishCountryRegionName] <> VALUES(Country[EnglishCountryRegionName])))

Number countries to show = COUNTROWS(ALL(Country[EnglishCountryRegionName])) - COUNTROWS(VALUES(Country[EnglishCountryRegionName]))

Anti-Sales (multiple) = IF([Number countries to show] = 0, CALCULATE([Sales]), CALCULATE([Sales], EXCEPT(VALUES(DimGeography[EnglishCountryRegionName]), VALUES(Country[EnglishCountryRegionName]))))

Slicer value/s = CONCATENATEX(VALUES(DimGeography[EnglishCountryRegionName]), DimGeography[EnglishCountryRegionName], ", ")

Anti-slicer single value = IF(NOT(ISBLANK([Anti-Sales (single)])), CONCATENATEX(VALUES(Country[EnglishCountryRegionName]), Country[EnglishCountryRegionName], ", "))

Anti-slicer multiple value/s = IF(NOT(ISBLANK([Anti-Sales (multiple)])), CONCATENATEX(VALUES(Country[EnglishCountryRegionName]), Country[EnglishCountryRegionName], ", "))

Power BI Mobile news February 2017

Today Microsoft released a new list of features for the Power BI mobile applications. Here’s the complete list of recent updates:

  • SSRS authentication using Active Directory Federation Services (ADFS) (preview): with single sign-on, you only have to sign in once with your organizational identity to explore all of your SSRS mobile reports and KPIs.
  • Load more than 100 rows in tables and matrices: if you have a large table or matrix on your dashboard or report, the tile will show as much data as possible.
  • New and improved: annotate and share insights instantly. The new share-and-annotate capability has an improved menu, making it easier and quicker to annotate and share insights with your colleagues. You can also share an annotated report directly from the Power BI app.
  • Phone reports – general availability: with phone reports, you can tailor a portrait view of your existing Power BI Desktop report specifically for mobile viewers.


Run stored procedures with report data as input parameters

How can I run stored procedures with report data as input parameters?

Stored procedures are one of SQL’s most powerful tools to update, insert or delete data in your database. Since most reports are designed to provide Business Intelligence to users, there could be an action the user needs to take based on the presented data. Wouldn’t it be great if the user could take that action without leaving the report?
This post describes how to run a stored procedure directly from your report with row data as input parameters.
I will use a simple example with a custom table and a small stored procedure so you get the picture. Soon you will discover that with these steps and a creative mind, the options are near limitless for taking direct actions in your database! Think about checklist reports or scheduling reports.
1) Create a custom table in your database (you can also use existing tables if you know what you’re doing)
CREATE TABLE [dbo].[AdvancedSSRS_CusOrd]
([ID] [int] IDENTITY(1,1) NOT NULL,
[Account] [varchar](50) NOT NULL,
[OrderNo] [varchar](50) NULL)
2) Create a stored procedure that Inserts new records in the table created in step 1
SET ansi_nulls ON
GO

SET quoted_identifier ON
GO

CREATE PROCEDURE Advancedssrs_insertintoadvancedssrs_cusord
@Account VARCHAR(50),
@OrderNo VARCHAR(50)
AS
BEGIN
      SET nocount ON;

      -- Check if record already exists
      IF EXISTS (SELECT 1
                 FROM   advancedssrs_cusord
                 WHERE  account = @Account
                        AND orderno = @OrderNo)
            -- if exists
            UPDATE advancedssrs_cusord
            SET    account = @Account,
                   orderno = @OrderNo
            WHERE  account = @Account
                   AND orderno = @OrderNo
      ELSE
            -- if record is new
            INSERT INTO advancedssrs_cusord
            VALUES      (@Account,
                         @OrderNo)
END

Since I don’t want to get multiple records of the same Customer/Order combination I included a check to see if the record exists. If it does, the stored procedure will overwrite the row with the same values. If it doesn’t, the stored procedure will add an extra row.
3) Add report parameters that will serve as input parameters for the stored procedure
In this example I want to mark some orders as “special”, so I will add a parameter for @Account and @OrderNo. Make sure to allow blank values.
4) Add a dataset that executes the stored procedure created in step 2
The dataset checks whether the report parameter(s) are NULL (which is how the report runs by default). Only if the parameters have values will the stored procedure be executed.
If you use data from your custom table in your report, you want to make sure this dataset is the first dataset the report will run. Unfortunately you cannot move the position of the dataset, so if you have an existing dataset, copy the query, delete the dataset and recreate it.
5) Add a column in your Tablix to launch the stored procedure
Insert text or an image and go to its properties. Browse to the Action tab. Choose “Go to report” and select the report you are working on. Add parameters to pass on, so the values in the row are passed to the report parameters.
You see where this is going? Once the user clicks the image/text, the report is launched again (refreshed), but this time the parameters are not blank. This triggers the dataset created in step 4 to execute the stored procedure. In my scenario, a row is inserted into the table created in step 1.
6) Test your report
7) Hide the report parameters

Use Change Tracking on SQL Server

Here we will explain the change tracking functions, show code examples, and demonstrate how to read the Change Tracking results

Change tracking functions

There is no out-of-the-box option to see the change tracking information. To see the rows that were changed and change details, use change tracking functions in T-SQL queries [1]

The CHANGETABLE(CHANGES) function shows all changes to a table that have occurred after the specified version number. A version number is associated with each changed row. Whenever there is a change on a table where Change tracking is enabled, the database version number counter is increased

The CHANGETABLE(VERSION) function "returns the latest change tracking information for a specified row" [2]

SELECT * FROM CHANGETABLE(CHANGES <table_name>, <version>) AS ChTbl

Note that the table used in the CHANGETABLE function has to be aliased

Table changes that have occurred after the specified version number

The CHANGE_TRACKING_CURRENT_VERSION() function retrieves the current version number, i.e. the version number of the last committed transaction

SELECT CHANGE_TRACKING_CURRENT_VERSION();

It returns NULL if Change tracking is not enabled on the database, an integer otherwise. The minimal value returned is 0. In the example above, it returns 17

The CHANGE_TRACKING_MIN_VALID_VERSION() function shows the minimum version number that can be used to get change tracking information for the specified table using the CHANGETABLE function

SELECT MinVersion =
    CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('Person.Address'));

The example above shows 14

The CHANGE_TRACKING_IS_COLUMN_IN_MASK function shows whether a specific column was updated or not. If it was updated, the value is 1, otherwise 0. It can only be used if the TRACK_COLUMNS_UPDATED option was set to ON when change tracking was enabled on the table

Reading the Change Tracking results

Here’s an example for the data changes executed on the Person.Address table

  1. Execute
    SELECT * FROM Person.Address;

    The Change Tracking results show that this is the first version of the tracked table and the current records in the Person.Address table

    Change tracking results - the first version of the tracked tables

  2. Modify the records in the Person.Address table, either using T-SQL or editing rows in the SQL Server Management Studio grid. The changes I made are highlighted – I updated the rows with AddressIDs 1, 5 and 2, in that order

    Modifying records using T-SQL or editing rows in SSMS

  3. I added a row. Note that the AddressID is 32522

    Row is added into a table

  4. I deleted the row I added in the previous step
    DELETE Person.Address WHERE addressid = 32522;
  5. To read the Change Tracking results, execute
    SELECT * FROM CHANGETABLE(CHANGES Person.Address, 1) AS ChTbl;

The results are:

Showing current results

The values shown in the ChOp column indicate the changes made. ‘U’ stands for update, ‘D’ for delete, ‘I’ for insert. There are three updates on the rows with AddressID 1, 2, and 5 and deletion of the row with AddressID = 32522. There is no clear indication that the 32522 row was first inserted, but according to the Change Creation Version (ChCrVer) and Change Version (ChVer) values 5 and 6, there were 2 changes. The second one was a delete, but we don’t know what the first one was

I re-inserted the same 32522 row and refreshed the results

Re-inserting the same row and refreshing the results

As expected, the current version number is 7, increased by 1 as there was one more change. But the information about the 32522 row is even vaguer when it comes to row history

Tracking individual column updates

If you add the SYS_CHANGE_COLUMNS column to the query, you will get a binary mask of the columns that were changed. The value is NULL only if the column change tracking option is not enabled, or if all columns except the primary key in the row were updated

Showing binary number of the changed column

“Column tracking can be used so that NULL is returned for a column that has not changed. If the column can be changed to NULL, a separate column must be returned to indicate whether the column changed.” [2]

To present column changes in a more readable format, use the CHANGE_TRACKING_IS_COLUMN_IN_MASK function. It has to be called for each column individually. In the following example, I’ll check whether the columns AddressLine1 and AddressLine2 have been modified

    SELECT ChTbl.AddressID,
           CHANGE_TRACKING_IS_COLUMN_IN_MASK(
               COLUMNPROPERTY(OBJECT_ID('Person.Address'), 'AddressLine1', 'ColumnId'),
               ChTbl.SYS_CHANGE_COLUMNS) AS AddLine1_Changed,
           CHANGE_TRACKING_IS_COLUMN_IN_MASK(
               COLUMNPROPERTY(OBJECT_ID('Person.Address'), 'AddressLine2', 'ColumnId'),
               ChTbl.SYS_CHANGE_COLUMNS) AS AddLine2_Changed
    FROM   CHANGETABLE(CHANGES Person.Address, 1) AS ChTbl;

Showing which columns were changed using column tracking

The value 1 in the AddLine1_Changed and AddLine2_Changed columns indicates that the specific column has been changed

As shown, SQL Server Change Tracking is a synchronous process that can be easily configured on your tables. It is supported in all SQL Server editions, so there are no additional licensing expenses. It can be utilized in applications designed for one-way and two-way data synchronization, as it can seamlessly synchronize several databases, each at a different time

The Change Tracking feature is not designed to return all information about the changes you might need, it’s designed to be a light auditing solution that indicates whether the row has been changed or not. It shows the ID of the row changed, even the specific column that is changed. What this feature doesn’t provide are the details about the change. You can match the change information to the database snapshot and the live database to find out more about the changes, but this requires additional coding and still doesn’t bring all the information that might be needed for auditing

Change tracking doesn’t answer the “who”, “when”, and “how” questions. Also, if there were multiple changes on a specific row, only the last one is shown. There is no user-friendly GUI that displays the results in just a couple of clicks. To see the change tracking records, you have to write code and use change tracking functions

The execution of SELECT statements and database object access are not tracked. These events have nothing to do with data changes, but since DBAs often request these features when it comes to auditing, it is worth mentioning