<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Posts on blog.her.se</title>
    <link>https://blog.her.se/post/</link>
    <description>Recent content in Posts on blog.her.se</description>
    <generator>Hugo -- gohugo.io</generator>
    <copyright>Copyright © 2008–2018, Steve Francia and the Hugo Authors; all rights reserved.</copyright>
    <lastBuildDate>Fri, 17 Feb 2023 00:00:00 +0000</lastBuildDate><atom:link href="https://blog.her.se/post/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Synapse Serverless SQL - Access tables and views without Storage Access</title>
      <link>https://blog.her.se/post/synapse-serverless-sql-access-tables-without-storage-or-synapse-ws-access/</link>
      <pubDate>Fri, 17 Feb 2023 00:00:00 +0000</pubDate>
      
      <guid>https://blog.her.se/post/synapse-serverless-sql-access-tables-without-storage-or-synapse-ws-access/</guid>
      <description>
        
          
            Problem statement  I want to give end users access to my Synapse serverless tables and views. I don&#39;t want to give the end users access to the Synapse Workspace. I don&#39;t want to give the users access to the storage account that is hosting the data (delta tables). I want to use an Azure Active Directory (AAD) group to manage the access.  Inspiration for the solution I found is documented here: See blog: https://www.
          
          
        
      </description>
    </item>
    
    <item>
      <title>Azure static web apps with Hugo</title>
      <link>https://blog.her.se/post/obsidian-hugo-azure-static-web-app/</link>
      <pubDate>Wed, 23 Nov 2022 00:00:00 +0000</pubDate>
      
      <guid>https://blog.her.se/post/obsidian-hugo-azure-static-web-app/</guid>
      <description>
        
          
            Obsidian + Hugo + Azure static app = True     My needs I usually document things using markdown in Obsidian. There are many articles about Obsidian, so I won&#39;t cover that here. Often I need to share my markdown notes with others. Obsidian has a lot of plugins to export notes in different formats like .docx, RTF, PDF and markdown. That&#39;s fine when sharing with a few recipients, but when you want to share with many, browser-based sharing is better.
          
          
        
      </description>
    </item>
    
    <item>
      <title>Shared External Hive Metastore with Azure Databricks and Synapse Spark Pools</title>
      <link>https://blog.her.se/post/shared-external-hive-metastore-with-azure-databricks-and-synapse-spark-pools2/</link>
      <pubDate>Tue, 09 Nov 2021 00:00:00 +0000</pubDate>
      
      <guid>https://blog.her.se/post/shared-external-hive-metastore-with-azure-databricks-and-synapse-spark-pools2/</guid>
      <description>
        
          
            Shared External Hive Metastore with Azure Databricks and Synapse Spark Pools Learn how to set up a shared external Hive metastore to be used across multiple Databricks workspaces and Synapse Spark Pools (preview)
    Image by Tumisu on Pixabay
1 Background To help structure your data in a data lake, you can register and share your data as tables in a Hive metastore. A Hive metastore is a database that holds metadata about your data, such as the paths to the data in the data lake and the format of the data (parquet, delta, CSV, etc.).
          
          
        
      </description>
    </item>
    
    <item>
      <title>How to visualize your nested IoT data in 3d using Spark and Power BI</title>
      <link>https://blog.her.se/post/how-to-visualize-your-nested-iot-data-in-3d-using-spark-and-power-bi/</link>
      <pubDate>Sat, 09 Jan 2021 00:00:00 +0000</pubDate>
      
      <guid>https://blog.her.se/post/how-to-visualize-your-nested-iot-data-in-3d-using-spark-and-power-bi/</guid>
      <description>
        
          
            Using Azure Databricks, Spark, Python and Power BI Python script visuals     Image by author
A. Introduction Every now and then, you run into new unique problems to solve. This time it was a client getting nested IoT data. Storing and visualizing IoT data is usually a standard task, but getting nested IoT data as a &quot;matrix&quot; per message with corresponding vectors is not as straightforward.
          
          
        
      </description>
    </item>
    
    <item>
      <title>Partition overwriting using parquet and Databricks</title>
      <link>https://blog.her.se/post/partitionoverwriteorreplace/</link>
      <pubDate>Sun, 15 Mar 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.her.se/post/partitionoverwriteorreplace/</guid>
      <description>
        
          
            Note about parquet and updating tables Nowadays many companies use the delta format (if they use Databricks) when they have data in the lake that needs to be updated.
This notebook shows what you had to do before the delta format existed, and how you needed to manage your update strategy, for example:
 Rewriting the full table Rewriting selected partitions manually Rewriting partitions dynamically  When new technology arrives, like delta, it is good to understand some of the problems or challenges that the new technology solves.
          
          
        
      </description>
    </item>
    
  </channel>
</rss>
