So, you changed the approach.
Good to know.
HTTP streaming, as mentioned, really works like file-system streaming: the tool maintains a small cache of the data it currently needs.
The reason is that information and assets in a PDF file may be spread across the file (not contiguous or linear).
For that reason it's not possible to know in advance what the library will need in order to render a page.
If your files are slow to render the first time, they are probably not optimized for web access. Try opening your file in Google Chrome: if it is optimized you will see pages appear while the download is still running; if it is not, you will only see the file once the download has completely finished.
If they are not optimized, I suggest you optimize your files for the web.
Even though I understand your approach, there is no way to give you a simple, ready-to-run solution:
- HTTP streams use random access and are not cached, for security and performance reasons.
The access is random for two reasons: there is no way to know in advance what the user will want to read, and no way to know in advance (from the client) how each page in the PDF file is structured.
- the best and least data-consuming way would be to completely download the file to storage
- otherwise, as other customers have done: split your PDF into single-page files, download each page one by one, and render them in a view pager.
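The random-access behaviour from the first point can be sketched as a small byte-range cache. This is plain illustrative Python, not the library's internals: `fetch(start, end)` is a hypothetical stand-in for an HTTP request carrying a `Range: bytes=start-end` header.

```python
class RangeCache:
    """Minimal sketch of random access over HTTP: the renderer asks
    for arbitrary byte ranges, and we serve them from a small block
    cache, fetching missing blocks on demand."""

    def __init__(self, fetch, block_size=64 * 1024):
        self.fetch = fetch        # fetch(start, end) -> bytes (inclusive range)
        self.block_size = block_size
        self.blocks = {}          # block index -> cached bytes

    def read(self, offset, length):
        out = bytearray()
        pos, end = offset, offset + length
        while pos < end:
            idx = pos // self.block_size
            if idx not in self.blocks:  # cache miss: one range request
                start = idx * self.block_size
                self.blocks[idx] = self.fetch(start, start + self.block_size - 1)
            block = self.blocks[idx]
            lo = pos - idx * self.block_size
            take = min(end - pos, len(block) - lo)
            out += block[lo:lo + take]
            pos += take
        return bytes(out)
```

Because the renderer's requests are unpredictable, a cache like this only helps with locality; it cannot avoid scattered requests across the whole file.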
This last approach guarantees that your customer can:
- access each page as quickly as possible
- interrupt the download process without downloading the complete magazine
Cons:
- you will lose thumbnails: they are rendered from the actual PDF, and from page to page in the view pager the "actual PDF file" contains only a single page... so each thumbnail will show that same single page.