My code for reading data from S3 usually works like this: library("aws.
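The snippet in the question is cut off, so here is a minimal sketch of what reading a feather file directly from S3 with the aws.s3 package can look like. The bucket name and object key are placeholders, not the question's actual values, and arrow::read_feather is only one possible reader.

```r
# Minimal sketch (assumed reconstruction): read a feather file straight
# from S3 with aws.s3. Bucket and object names are placeholders.
library(aws.s3)
library(arrow)  # provides read_feather()

dat <- s3read_using(
  FUN    = read_feather,
  object = "my-data.feather",  # hypothetical object key
  bucket = "my-bucket"         # hypothetical bucket name
)
```

Reading like this pulls the whole object over the network on every call, which is the behaviour the question is trying to speed up.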
When you want to work with larger datasets inside a Shiny app, we generally recommend investigating other options. In your specific case, you may also want to investigate downloading the feather file to the local machine, then importing it from there, instead of reading it over the network.

Can you clarify the last piece? If I download the feather file locally, wouldn't that be the same as using output files from my Rmarkdown ETL process instead of pushing them to S3?
That would mean storing a copy of all my data on the server instead of elsewhere, or are you saying to temporarily copy it locally, read the file, and then remove it? Am I understanding this correctly?

You can try the new parquet format, accessible from R with the new arrow package; I don't know if it could help here.
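As a rough illustration of that parquet suggestion, here is a minimal sketch using the arrow package; the dataset and file name are only examples.

```r
# Sketch of the parquet suggestion using the arrow package.
library(arrow)

# Write the data out as parquet once (e.g. at the end of the ETL step) ...
write_parquet(mtcars, "mtcars.parquet")

# ... then read it back. Parquet is columnar, so you can pull only the
# columns you actually need instead of the whole file.
dat <- read_parquet("mtcars.parquet", col_select = c(mpg, cyl))
```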
Also, there is the fst format, which may be competitive if you want to stay with a file format. A database would be a good fit if you want to send your data in and retrieve only what is needed, when it is needed.

Yes, that's my suggestion. If I understand your benchmarks correctly, reading the feather file from the local file system is relatively fast, but very slow from S3. So my suggestion is to download the file in the background, read it, and discard it once done. You should be able to do the download part asynchronously using promises, if you design the app carefully.
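A hedged sketch of that download-read-discard idea, made asynchronous with the future and promises packages so the Shiny session is not blocked while the file transfers. The helper name, bucket, and object key are assumptions for illustration, not part of the original answer.

```r
# Sketch: fetch an S3 object to a temp file in a background process,
# read it, delete the local copy, and return the data as a promise.
library(promises)
library(future)

plan(multisession)  # run the download in a separate background R process

load_from_s3 <- function(bucket, key) {   # hypothetical helper
  future_promise({
    tmp <- tempfile(fileext = ".feather")
    aws.s3::save_object(object = key, bucket = bucket, file = tmp)
    dat <- arrow::read_feather(tmp)
    unlink(tmp)  # discard the local copy once it has been read
    dat
  })
}

# Inside a Shiny server function this could be used roughly like:
# output$table <- renderTable({
#   load_from_s3("my-bucket", "my-data.feather") %...>% head()
# })
```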
Ben Gorman: Authenticate with aws.s3. Sign in to the management console, then search for and pull up the S3 homepage. Next, create a bucket: give it a unique name, choose a region close to you, and keep the other default settings in place or change them as you see fit. Now we need to create a special user with S3 read and write permissions.
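Once that user exists and its access key pair has been generated, a sketch of how the credentials are typically supplied to aws.s3 from R is shown below. The key values, region, and bucket name are placeholders; aws.s3 picks up these standard environment variables automatically.

```r
# Supply the new user's credentials to R (values are placeholders).
library(aws.s3)

Sys.setenv(
  "AWS_ACCESS_KEY_ID"     = "AKIAXXXXXXXXXXXXXXXX",
  "AWS_SECRET_ACCESS_KEY" = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "AWS_DEFAULT_REGION"    = "us-east-1"   # the region chosen for the bucket
)

# Quick sanity checks: list the buckets this user can see, then upload a file.
bucketlist()
put_object(file = "local-file.csv", object = "local-file.csv", bucket = "my-bucket")
```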