How to Fix Slow SharePoint List Refresh in Power BI
- Vojtěch Šíma
- Dec 24, 2025
- 14 min read
tl;dr: for lists with only primitive types (Text, Numbers, etc.), stick to Implementation 2.0 and disable any counts/totals on the source. However, if you need the absolute maximum speed or have complex columns, call the SharePoint REST API directly and set the pagination limit to 5,000. Read the full article below for the details.
SharePoint List connector implementations in Power Query (1.0 vs 2.0)
When you want to natively connect to a SharePoint list in Power Query, the GUI offers either a SharePoint list or a SharePoint Online list option. Both call the same background M function; the only difference is the options passed to it.
The function is called SharePoint.Tables() and its options are:
ApiVersion
Currently 14, 15, or "Auto". If you do not specify it, it defaults to 14. Microsoft also notes that non-English SharePoint sites require at least 15.
Implementation
Currently null or "2.0". This controls whether you are using the original connector behaviour or the newer 2.0 implementation.
ViewMode
Currently "All" or "Default" (only valid for Implementation "2.0").
All gives you all user columns plus system columns. Default matches whatever view is set as Default in SharePoint, so column add/remove changes in that view propagate into the connector.
There is also DisableAppendNoteColumns, but the GUI does not really push you into using it; it simply stops the connector from using a separate endpoint for note columns.
And underneath all of it, you are talking to SharePoint REST endpoints.
The general rule is that the original implementation may be safer, but it is slower and does not give you friendly column names out of the box. Implementation 2.0 with the Default ViewMode can therefore be a good alternative; however, Microsoft notes a couple of issues with it. The major one is covered in the next section. So try it yourself first and then decide.
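For reference, here is how the common connector calls look in M (the site URL below is a placeholder; replace it with your own):

```m
// Original implementation: system column names, slower paging
source10 = SharePoint.Tables("https://contoso.sharepoint.com/sites/Experiments", [ApiVersion = 15]),

// Implementation 2.0 with all user and system columns
source20All = SharePoint.Tables("https://contoso.sharepoint.com/sites/Experiments", [Implementation = "2.0", ViewMode = "All"]),

// Implementation 2.0 following whatever view is set as Default in SharePoint
source20Default = SharePoint.Tables("https://contoso.sharepoint.com/sites/Experiments", [Implementation = "2.0", ViewMode = "Default"])
```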
If you want a bug-free and super-fast version with a bit of code, keep reading.
Fix /RenderListDataAsStream (500) Internal Server Error in Power BI
DataSource.Error: Web.Contents failed to get contents from 'https://daatlers.sharepoint.com/sites/Experiments/_api/Web/Lists(guid'10f52422-d47d-475e-ad30-60e906dc1a6e')/RenderListDataAsStream' (500): Internal Server Error
If you decided to use Implementation 2.0, you may encounter this error even though you did everything correctly.
Most likely, the issue appears when your list has over 5,000 items, when you have more than 12 default columns (or join many columns), or when you use Totals in one of your columns.
In my experience, fixing the Totals usually did the trick. If you don't know what I mean by Totals, look at this picture:

When you see this count below your list, it can actually cause the issue. To fix this, do the following:

Scroll down until you see these options:

Expand Totals and check the fields; if any of them have an aggregation, such as 'Count', change it to None.

Scroll back to the top (or bottom) and don't forget to click OK.

Why is the SharePoint list refresh slow in Power BI?
Now, let’s talk about actual optimization, not just clearing obstacles so the connector works.
The main reason the SharePoint list refresh tends to be slow is pagination, or more precisely, how many pages the “native clicking” solution ends up pulling. The classic list items endpoint returns the first 100 items by default, so if you just let it page, you can end up with a ton of requests. Implementation 2.0 should have a page size of around 1,000 or 2,000.
The custom connector, which we will write later, can go up to 5,000 items per page, so we should achieve a noticeably faster refresh.
If you're not sure what pagination is, in short, it's the process of dividing larger content into smaller chunks, called pages. This can improve performance when you view just a subset of data, and can also help the server breathe easier.
However, when you ask for all the items, dividing the results into too many pages means more requests, and therefore more time to process each of the requests. Therefore, you generally have to find a balance between too many requests and too large subsets.
Generally, if you access third-party APIs supporting paging, you will usually have a maximum limit of items you can request per page. Often, this maximum value can be the fastest.
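To see why the page size matters, here is a quick back-of-the-envelope helper (a sketch; the item counts are just examples):

```m
// How many requests are needed to pull totalItems at a given page size
requestsNeeded = (totalItems as number, pageSize as number) as number =>
    Number.RoundUp(totalItems / pageSize)
// requestsNeeded(20000, 100)  -> 200 requests at the 100-item default
// requestsNeeded(20000, 5000) -> 4 requests at the REST API maximum
```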
How to build a faster SharePoint connector in M language
Even though this is probably not a beginner-level solution, I’ll give you easy-to-copy functions you can just call and be done with it.
The nice part is that you do not need to worry about an extra authentication layer for the REST API. Since we are not using POST, we can leverage the same OAuth2 (Organizational Account) sign-in you already use when connecting to SharePoint.
We’ll build the whole thing with Web.Contents and a simple List.Generate loop.
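If you have not used List.Generate before, it is M’s general-purpose loop: an initial state, a continuation condition, a next-state function, and an optional selector. A toy example:

```m
// Start at 0, continue while the state is below 3, add 1 each step
// Produces the list {0, 1, 2}
List.Generate(() => 0, each _ < 3, each _ + 1)
```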
Let's start by defining parameters and the functions.
Host as Parameter
https://<your tenant>.sharepoint.com

List site or team as a Parameter (Managed path)
sites/SiteName/ or teams/teamName/

Get List Id
To get data from the list, we need its ID, which is not easy to spot simply from the URL, so we can call a function to get it for us.
All you need to know is its Title, and you can work with that.
let
    getListId = (listTitle as text) as text =>
        Json.Document(
            Web.Contents(
                host,
                [
                    RelativePath = siteName & "_api/Web/Lists",
                    Headers = [Accept = "application/json"],
                    Query = [
                        #"$select" = "Title, Id",
                        #"$filter" = "Title eq '" & listTitle & "'"
                    ]
                ]
            )
        )[value]{0}[Id]
in
    getListId
// expected value format: 10f52422-d47d-475e-ad30-60e906dc1a6e
I recommend running this once and then saving the result manually inside a parameter. That way, a later rename of your list will not break your query.

Get List values
To get the list values, let’s first define a function that loads a single page. Then we’ll add a pagination mechanism that pulls all pages, and therefore all values.
Later, I’ll show you how to optimize it with query parameters, and also the process of getting friendly names.
getListPage = (siteName as text, listId as text) =>
    Json.Document(
        Web.Contents(
            host,
            [
                RelativePath = siteName & "_api/Web/Lists(guid'" & listId & "')/items",
                Headers = [Accept = "application/json"],
                Query = [#"$top" = "5000"]
            ]
        )
    ),
getPage = getListPage(siteName, listId)
This is a simple request that gets you the first 5,000 items from your list. If you have a tiny list and you know it will not go over 5,000 items, you can stop here.
But in most cases, you will end up with more items later, so let’s future-proof it now. Let’s build the paginator, so we cover all pages.
Please note that I will also modify the original function, as paging requires slightly different behaviour.
getListPage = (optional relativePath as nullable text, optional siteName as nullable text, optional listId as nullable text) =>
    Json.Document(
        Web.Contents(
            host,
            [
                RelativePath = relativePath ?? siteName & "_api/Web/Lists(guid'" & listId & "')/items?$top=5000",
                Headers = [Accept = "application/json"]
            ]
        )
    ),
paginator =
    List.Generate(
        () => [
            request = getListPage(null, siteName, listId),
            next = request[#"odata.nextLink"]?,
            index = 0
        ],
        each [next]? <> null or [index] = 0,
        each [
            request = getListPage(Text.AfterDelimiter(next, (host & "/"))),
            next = [request]?[#"odata.nextLink"]?,
            index = [index] + 1
        ],
        each [request]?[value]?
    ),
combinePages = List.Combine(paginator)
If you don't care about metadata, change the Accept header to: [Accept = "application/json;odata=nometadata"]
Here’s the modified version with the paginator, so you always get all items.
Paging on Microsoft’s side works like this: after each request, you get a next page URL (odata.nextLink). That means you have to send a brand new request for every page, and the URL already contains everything needed to fetch the next chunk.
That’s why I rebuilt the single-page request into getListPage, so it accepts relativePath as the first parameter and makes the rest optional. For the first call, we pass relativePath = null, so it falls back to the original request with ?$top=5000. For the next pages, we only pass the relative path extracted from odata.nextLink.
Lastly, there’s a cheap little check for the single-page case. That’s what index is for. It makes sure the generator runs at least once, and it is also handy for quick debugging.
Also, you can see I’m doing some delimiter magic when building the next request. That’s because we have to split the host from the relative path to keep the query refreshable in the Power BI Service.
This is one downside of Microsoft’s paging implementation: we have to assume the next link always stays on the same host and is not some random mirror. The host you use for authentication cannot be dynamic, so you cannot just feed the full next URL into Web.Contents and hope the Service will like it.
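As a concrete illustration of the split (the site and query string below are just placeholders):

```m
host = "https://contoso.sharepoint.com",
nextLink = "https://contoso.sharepoint.com/sites/Experiments/_api/Web/Lists(guid'abc')/items?$skiptoken=xyz",
// Everything after "host/" becomes the RelativePath of the next request,
// so authentication keeps happening against the static host
relativePath = Text.AfterDelimiter(nextLink, host & "/")
// -> "sites/Experiments/_api/Web/Lists(guid'abc')/items?$skiptoken=xyz"
```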
Get List values with friendly names, expansion and dynamic typing
I wrote the first version a couple of days ago, but then realized that the solution coming next breaks easily with field types that require expansion and subfield selection, such as User. This made me rethink the whole approach. I thought I could increase the level of complexity gradually, but I realized I had to do it all in one go, as partial solutions didn't make sense. So here is a solution that lets you define friendly column names. It gets the internal names and types, builds a dynamic schema using those friendly names, and retypes every main column. The output is an expanded table with types and friendly column names.
As I mentioned earlier, get ready for what is probably overkill. But hey, it's way faster than the regular clicking solution.
I will first introduce a couple of limitations so you can decide if this suits you. Then, I'll paste the entire code as a series of steps (rather than a single function) so you can inspect and modify it more easily. Finally, I'll break down the main parts.
Limitations & Workarounds
URL Length: If you request too many fields, you might hit the max URL character limit. In that case, comment out the field query parameter, fetch everything first, and then apply the filter on the already fetched data.
Expansion: Types requiring expansion are limited to Id and Title. You can always modify sharePointTypeProfile to add more subfields; the script will expand them seamlessly.
Structured Data: Structured results are neither typed nor expanded. If the host returns a record or list, the column will remain formatted as a Record or List. Since I don't know what you want to do with that data, I kept it raw.
Metadata: I don't return metadata. If you need it, just modify the Accept header.
Parameter Prerequisite: If you jumped straight here and didn't read anything before, note that this code requires the parameters listed above, such as host, siteName, and listId.
Friendly Names Driver: I built this to accept friendly names (what you see in the browser). This is generally safe since column names should be unique, even though they aren't the InternalName. I did it this way because Internal Names are annoying to get from the UI. If you run into errors, check the actual properties via the Fields endpoint and then modify your list.
let
defaultNoTransformation = (x as any) => x,
defaultRecordTransformation = (x) => if x is text then Json.Document(x) else if x is record then x else null,
defaultTypeRecord = [Type = type any, ReTyper = defaultNoTransformation, ToExpand = false, SelectSubFields = {}, Optional = false],
sharePointTypeProfile = [
// TEXT TYPES
Text = [Type = type nullable text, ReTyper = Text.From, ToExpand = false, SelectSubFields = {}, Optional = false],
Note = [Type = type nullable text, ReTyper = Text.From, ToExpand = false, SelectSubFields = {}, Optional = false],
Choice = [Type = type nullable text, ReTyper = Text.From, ToExpand = false, SelectSubFields = {}, Optional = false],
Guid = [Type = type nullable text, ReTyper = Text.From, ToExpand = false, SelectSubFields = {}, Optional = false],
Computed = [Type = type nullable text, ReTyper = Text.From, ToExpand = false, SelectSubFields = {}, Optional = false],
Calculated = [Type = type nullable text, ReTyper = Text.From, ToExpand = false, SelectSubFields = {}, Optional = false],
File = [Type = type nullable text, ReTyper = Text.From, ToExpand = false, SelectSubFields = {}, Optional = false],
ContentTypeId = [Type = type nullable text, ReTyper = Text.From, ToExpand = false, SelectSubFields = {}, Optional = false],
ThreadIndex = [Type = type nullable text, ReTyper = Text.From, ToExpand = false, SelectSubFields = {}, Optional = false],
Threading = [Type = type nullable text, ReTyper = Text.From, ToExpand = false, SelectSubFields = {}, Optional = false],
WorkflowStatus = [Type = type nullable text, ReTyper = Text.From, ToExpand = false, SelectSubFields = {}, Optional = false],
OutcomeChoice = [Type = type nullable text, ReTyper = Text.From, ToExpand = false, SelectSubFields = {}, Optional = false],
CrossProjectLink = [Type = type nullable text, ReTyper = Text.From, ToExpand = false, SelectSubFields = {}, Optional = false],
// LIST TYPES
MultiChoice = [Type = type nullable list, ReTyper = defaultNoTransformation, ToExpand = false, SelectSubFields = {}, Optional = false],
TaxonomyFieldTypeMulti = [Type = type nullable list, ReTyper = defaultNoTransformation, ToExpand = false, SelectSubFields = {}, Optional = false],
// INTEGER TYPES
Integer = [Type = type nullable number, ReTyper = Number.From, ToExpand = false, SelectSubFields = {}, Optional = false],
Counter = [Type = type nullable number, ReTyper = Number.From, ToExpand = false, SelectSubFields = {}, Optional = false],
ModStat = [Type = type nullable number, ReTyper = Number.From, ToExpand = false, SelectSubFields = {}, Optional = false],
WorkflowEventType = [Type = type nullable number, ReTyper = Number.From, ToExpand = false, SelectSubFields = {}, Optional = false],
PageSeparator = [Type = type nullable number, ReTyper = Number.From, ToExpand = false, SelectSubFields = {}, Optional = false],
// NUMBER TYPES
Number = [Type = type nullable number, ReTyper = Number.From, ToExpand = false, SelectSubFields = {}, Optional = false],
Currency = [Type = type nullable number, ReTyper = Number.From, ToExpand = false, SelectSubFields = {}, Optional = false],
GridChoice = [Type = type nullable number, ReTyper = Number.From, ToExpand = false, SelectSubFields = {}, Optional = false],
AverageRating = [Type = type nullable number, ReTyper = Number.From, ToExpand = false, SelectSubFields = {}, Optional = false],
RatingScale = [Type = type nullable number, ReTyper = Number.From, ToExpand = false, SelectSubFields = {}, Optional = false],
// LOGICAL TYPES
Boolean = [Type = type nullable logical, ReTyper = Logical.From, ToExpand = false, SelectSubFields = {}, Optional = false],
Attachments = [Type = type nullable logical, ReTyper = Logical.From, ToExpand = false, SelectSubFields = {}, Optional = false],
Recurrence = [Type = type nullable logical, ReTyper = Logical.From, ToExpand = false, SelectSubFields = {}, Optional = false],
AllDayEvent = [Type = type nullable logical, ReTyper = Logical.From, ToExpand = false, SelectSubFields = {}, Optional = false],
// DATETIME TYPES
DateTime = [Type = type nullable datetimezone, ReTyper = DateTimeZone.From, ToExpand = false, SelectSubFields = {}, Optional = false],
// RECORD TYPES
URL = [Type = type nullable record, ReTyper = defaultNoTransformation, ToExpand = false, SelectSubFields = {}, Optional = false],
// EXPANDABLE TYPES
Lookup = [Type = type record, ReTyper = defaultNoTransformation, ToExpand = true, SelectSubFields = {"Id", "Title"}, Optional = false],
LookupMulti = [Type = type nullable list, ReTyper = defaultNoTransformation, ToExpand = true, SelectSubFields = {"Id", "Title"}, Optional = false],
User = [Type = type nullable record, ReTyper = defaultNoTransformation, ToExpand = true, SelectSubFields = {"Id", "Title"}, Optional = false],
UserMulti = [Type = type nullable list, ReTyper = defaultNoTransformation, ToExpand = true, SelectSubFields = {"Id", "Title"}, Optional = false],
// EXPANDABLE JSON TYPES
Thumbnail = [Type = type nullable record, ReTyper = defaultRecordTransformation, ToExpand = false, SelectSubFields = {}, Optional = false],
Location = [Type = type nullable record, ReTyper = defaultRecordTransformation, ToExpand = false, SelectSubFields = {}, Optional = false],
Image = [Type = type nullable record, ReTyper = defaultRecordTransformation, ToExpand = false, SelectSubFields = {}, Optional = false],
TaxonomyFieldType = [Type = type record, ReTyper = defaultRecordTransformation, ToExpand = false, SelectSubFields = {}, Optional = false],
// FALLBACK TYPES
Default = defaultTypeRecord,
Invalid = defaultTypeRecord,
Error = defaultTypeRecord,
MaxItems = defaultTypeRecord
],
columnsWanted = {
/*
example of fields I used for testing
replace this with your own list
"friendly_id",
"friendly_column_name",
"column_hey",
"some_nice_date",
"number",
"employee",
"location",
"longText",
"choice",
"isTrue",
"lookup",
"hyperLink",
"image",
"managedData",
"Rating (0-5)",
"Author",
"Status",
"Status (Legacy)",
"O'Reilly",
"numberNoDecimalPlace",
"lookupSingle",
"taxonomySingle"
*/
},
columnsWantedFilter = Text.Combine(
List.Transform(
columnsWanted,
each "Title eq '" & Text.Replace(_, "'", "''") & "'"
),
" or "
),
fieldsWithTypes = List.Buffer(
List.Transform(
Json.Document(
Web.Contents(host, [
RelativePath = siteName & "_api/Web/Lists(guid'" & listId & "')/fields?",
Headers = [Accept = "application/json;odata=nometadata"],
Query = [
#"$select" = "InternalName,Title,TypeAsString,LookupList,LookupField,Hidden,ReadOnlyField",
#"$filter" = columnsWantedFilter
]
])
)[value],
each _ & Record.FieldOrDefault(sharePointTypeProfile, [TypeAsString], defaultTypeRecord)
)
),
fieldSelect =
let
parent = List.Combine(
List.Transform(
List.Select(fieldsWithTypes, each not List.IsEmpty([SelectSubFields])),
(parent) => List.Transform(parent[SelectSubFields], each parent[InternalName] & "/" & _)
)
)
in
Text.Combine(List.Transform(fieldsWithTypes, each [InternalName]) & parent, ","),
fieldExpand = Text.Combine(
List.Transform(
List.Select(fieldsWithTypes, each [ToExpand]),
each [InternalName]
),
","
),
fieldsForSchema = Record.FromList(
List.Transform(fieldsWithTypes, each Record.SelectFields(_, {"Type", "Optional"})),
List.Transform(fieldsWithTypes, each [Title])
),
tableSchema = type table Type.ForRecord(fieldsForSchema, false),
getListPage = (optional relativePath as nullable text, optional siteName as nullable text, optional listId as nullable text) =>
Json.Document(
Web.Contents(host, [
RelativePath = relativePath ?? siteName & "_api/Web/Lists(guid'" & listId & "')/items?" & Uri.BuildQueryString([
#"$top" = "5000",
#"$select" = fieldSelect,
#"$expand" = fieldExpand
]),
Headers = [Accept = "application/json;odata=nometadata"]
])
),
paginator = List.Generate(
() => [
request = getListPage(null, siteName, listId),
next = request[#"odata.nextLink"]?,
index = 0
],
each [next]? <> null or [index] = 0,
each [
request = getListPage(Text.AfterDelimiter(next, (host & "/"))),
next = [request]?[#"odata.nextLink"]?,
index = [index] + 1
],
each [request]?[value]?
),
combinePages = List.Combine(paginator),
reTypeFunctionsRecord = Record.FromList(
List.Transform(fieldsWithTypes, each [ReTyper]),
List.Transform(fieldsWithTypes, each [InternalName])
),
fieldOrder = List.Transform(fieldsWithTypes, each [InternalName]),
listOfValuesFromRecords = List.Transform(
combinePages,
(r) => List.Transform(
fieldOrder,
(fx) => Record.FieldOrDefault(reTypeFunctionsRecord, fx, defaultNoTransformation)(Record.FieldOrDefault(r, fx, null) )
)
),
typedTable = #table(tableSchema, listOfValuesFromRecords)
in
    typedTable
I don't recommend exploring the code here in the blog due to limited viewing options; it's best to paste it into a real editor.
The Key Parts Explained
sharePointTypeProfile: This is the driver of the whole typed schema. It's a record containing a sub-record for every single Type coming from SharePoint that I could find.
Type & ReTyper: These are the most important parts; they determine what your final table looks like. ReTyper holds a function that applies the transformation to each row (simply declaring the type won't necessarily retype the value). If you modify this, keep in mind it expects a function.
Custom Transformations: I pre-defined a couple of transformations for Record-like types. If you want to customize this, for example, to extract the Date from a Datetimezone for every Datetime field, edit it here.
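For example, to keep only the date part of every DateTime field, you could swap the profile entry like this (a sketch; Date.From and DateTimeZone.From both pass null through, so nullable rows stay safe):

```m
// Replaces the DateTime entry in sharePointTypeProfile:
// parse the ISO datetime text, then truncate to a date
DateTime = [Type = type nullable date, ReTyper = (x) => Date.From(DateTimeZone.From(x)), ToExpand = false, SelectSubFields = {}, Optional = false]
```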
ToExpand & SelectSubFields: These determine if the Type requires expansion (like User or Lookup). By default, I only return Id and Title, but it's super easy to modify. The subsequent functions aren't hardcoded, so they will seamlessly react to your changes (e.g., if you need EMail from a User, add it here).
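For instance, to also pull each user's e-mail address, extend the User entry (EMail is the internal property name on the SharePoint user object):

```m
// Adds "EMail" to the expanded subfields; $select/$expand adapt automatically
User = [Type = type nullable record, ReTyper = defaultNoTransformation, ToExpand = true, SelectSubFields = {"Id", "Title", "EMail"}, Optional = false]
```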
columnsWanted: This is the second most important part (or perhaps the first), because it drives the whole query. Here, you define the friendly names of the columns you want. And honestly, that's about it; that's all you have to do to run this. Note: you don't define subfields here.
Field Fetching & OData: Once you define the columns above, the query sends a request to fetch the fields available inside the list. It gets additional info (like Type) and maps our initial TypeProfile to the results. It then builds the OData queries ($select and $expand) to create the minimal possible request to fetch your actual data.
Reordering & Retyping: The API might return results in a different order than our schema, so we need a mechanism to force the column order and retype the data.
I tested a couple of variants but decided to go with a Record-based approach.
I built a record where each field is a column name paired with a transformation function.
Then, I modify the paginated results: for each record in the list, I reconstruct the result to match the exact order and types of my schema.
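In miniature, the record-based mechanism looks like this (made-up column names and values):

```m
// One transformation function per internal column name
reTypers = [Amount = Number.From, Label = Text.From],
// A row as the API might return it, in a different field order
row = [Label = 42, Amount = "12.5"],
schemaOrder = {"Label", "Amount"},
// Rebuild the row: forced order, forced types -> {"42", 12.5}
rebuilt = List.Transform(
    schemaOrder,
    (f) => Record.Field(reTypers, f)(Record.FieldOrDefault(row, f, null))
)
```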
The Result: The output is a table that is fully typed and dynamically changes based on the columns you provide.
If you want to go bananas, modify the query to make columnsWanted a Parameter. This would allow you to add new columns straight from the Power BI Service without even opening the semantic model. Although I don't recommend it, theoretically, it should work.
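One way to do that is a comma-separated text parameter split inside the query (columnsList is a hypothetical text parameter you would create):

```m
// e.g. columnsList = "friendly_id,employee,Status"
columnsWanted = List.Transform(Text.Split(columnsList, ","), Text.Trim)
```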
One tip: if you want a list of all users as a dimension, just grab the IDs here and build the dimension separately using the _api/web/siteusers endpoint.
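A minimal sketch of such a user dimension (it reuses the host and siteName parameters and assumes the same Organizational Account sign-in; the selected properties are the common ones on the site user object):

```m
let
    // Fetch all site users; nometadata keeps the payload lean
    usersRaw = Json.Document(
        Web.Contents(
            host,
            [
                RelativePath = siteName & "_api/web/siteusers",
                Headers = [Accept = "application/json;odata=nometadata"]
            ]
        )
    )[value],
    // Keep only the fields needed to join back to the fact table
    DimUsers = Table.FromRecords(
        List.Transform(usersRaw, each Record.SelectFields(_, {"Id", "Title", "Email"}, MissingField.UseNull))
    )
in
    DimUsers
```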
Benchmark
To justify all of this work, we need to see whether we actually gain any refresh performance. I tested it with approximately 20k rows. Most of them only had basic types filled in, but a couple were filled with more expensive types such as User, Managed Data, Lookup, Pictures, Rating, Locations, and so on. All variants fetched all columns with zero transformations after the table load.
Timing was done very simply: each query was isolated and run by itself by clicking Refresh (schema and data) in Power BI Desktop.
| Implementation | Time |
| --- | --- |
| Legacy implementation (ApiVersion=15) | 50 seconds |
| Implementation 2.0 (Implementation="2.0", ViewMode="Default") | 1m 20s |
| Custom solution | 9 seconds |
If I were to only select, let's say, 5 fully filled columns with basic types, it would look like this. For clicking solutions, I added a Select Columns step; for the custom solution, naturally, I defined the same columns in my columnsWanted.
| Implementation | Time |
| --- | --- |
| Legacy implementation (ApiVersion=15) | 42 seconds |
| Implementation 2.0 (Implementation="2.0", ViewMode="Default") | 6 seconds |
| Custom solution | 3 seconds |
If I were to reduce the noise and start timing only once megabytes visibly start loading, you could subtract 2-3 seconds from each, making the custom solution basically instant. Implementation 2.0 also performed extremely well.
You can also see how the different column types affect each solution; in any case, the custom solution, even with its supposedly 'extreme' overhead, performs very nicely.
Thank you
If you're reading this and actually checked out the code, major props to you. I would love to hear whether it met your expectations and how usable it was. I will likely also post an updated version (after more benchmarking) to GitHub, so make sure you follow me there.