Established B2B company with local, colo, cloud, and virtualized services needs help on a special AWS project.
We need a multi-step utility/development project that ties several AWS services together into one collective "project".
What We Currently Do:
- We have hundreds of data partners. They provide us API endpoint access to download updated values/data.
- We invoke their API in a .NET solution.
- Response data is saved locally.
- We have to run multiple API GETs, akin to looping through page=1, page=2, and so on.
- This requires a good bit of computing power and resources, especially now that we are scaling.
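For context, the current .NET loop is conceptually something like the Python sketch below (names such as `fetch_page` are illustrative, not our actual code): page through an endpoint until a page comes back empty, saving each response locally.

```python
import json
from pathlib import Path

def download_all_pages(fetch_page, out_dir, partner, max_pages=1000):
    """Loop page=1, page=2, ... until a page returns no records,
    saving each JSON response locally (what our .NET solution does today).

    fetch_page(partner, page) -> list of records (empty list = done).
    Returns the number of pages saved.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    saved = 0
    for page in range(1, max_pages + 1):
        records = fetch_page(partner, page)
        if not records:
            break
        (out / f"{partner}_page{page}.json").write_text(json.dumps(records))
        saved += 1
    return saved
```

Multiplied across hundreds of partners, it is this loop that we want to move off our own hardware and into AWS.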
What We Want:
- We want to invoke one (1) AWS API. Invoking this single endpoint should trigger all further processing in AWS and return a "job status" code or results (total successful API calls, etc.).
- We want to pass the AWS API a list of all API URLs and/or path variables and/or query string parameters.
- This batch list of URLs to be invoked will be sent to AWS somehow, such as in an HttpWebRequest body (JSON).
- AWS will have to "loop through" or "process" the list of URLs. I'm not sure whether this is Lambda or Batch; there is no preference except lightweight and serverless.
- After each API returns a response (which is always in JSON format), we want the response saved as a .json file in an AWS S3 bucket.
- A method to easily query/index S3 bucket contents. Some buckets will have millions of files; we need to be able to quickly determine whether a filename exists (CloudSearch?).
- You will have to create the AWS API Gateway and simplify the process of passing query string parameters for each URL (example: zip=24018, pagenum=1).
- You will have to create the AWS S3 bucket and consider permissions, access, etc.
- You will have to create some sort of "job" or "batch" function that will tell the AWS API to fetch/update each URL in the JSON body.
- You will have to provide a simple and extremely responsive/quick way to return S3 bucket file search results.
- You will have to create a new IAM user just for this project.
- You must lock down the IAM user to only these resources.
- You must limit the IAM user's access to the us-east-1 region only.
- The IAM user will have rights to invoke the AWS API, create/manage/run Lambda functions and/or AWS Batch jobs, and S3 read/write permissions for only the buckets that pertain to this project.
- Attached is a basic visual diagram of the flow/goal of this project. It is not meant to be static or correct. We need help designing the most efficient and cost-effective way to accomplish the goals outlined in the diagram.
- It is not meant to be "accurate", rather an overview of what needs to happen and the net result of saving the URL response data into S3 buckets.
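To make the batch request concrete, one POST to the API Gateway endpoint might carry a body like the following. The field names (`bucket`, `urls`, `id`, `params`) are placeholders to be agreed on, not a fixed schema:

```json
{
  "bucket": "partner-responses",
  "urls": [
    {"id": "partner42", "url": "https://api.example.com/values", "params": {"zip": "24018", "pagenum": "1"}},
    {"id": "partner42", "url": "https://api.example.com/values", "params": {"zip": "24018", "pagenum": "2"}}
  ]
}
```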
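The region and resource lock-down for the project IAM user can be expressed with the `aws:RequestedRegion` condition key. A sketch of such a policy follows; the account ID, API Gateway ID, function prefix, and bucket names are placeholders to be replaced with the project's real resources:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ProjectOnlyUsEast1",
      "Effect": "Allow",
      "Action": [
        "execute-api:Invoke",
        "lambda:InvokeFunction",
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:execute-api:us-east-1:123456789012:abcde12345/*",
        "arn:aws:lambda:us-east-1:123456789012:function:project-*",
        "arn:aws:s3:::project-partner-responses",
        "arn:aws:s3:::project-partner-responses/*"
      ],
      "Condition": {
        "StringEquals": {"aws:RequestedRegion": "us-east-1"}
      }
    }
  ]
}
```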
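One lightweight serverless shape for the "loop through the URLs" step is a Lambda handler along these lines. This is a sketch, assuming the placeholder body fields `bucket` and `urls` from the request above; `boto3` is provided by the Lambda runtime:

```python
import urllib.parse
import urllib.request

def s3_key_for(url_id, params):
    """Build a deterministic S3 key from the caller-supplied id and
    query parameters, so the same request always maps to the same file."""
    parts = [f"{k}-{v}" for k, v in sorted(params.items())]
    return f"responses/{url_id}/{'_'.join(parts)}.json"

def handler(event, context):
    # boto3 is imported here (it is preinstalled in the Lambda runtime)
    # so the key-naming logic above can be tested without AWS credentials.
    import boto3
    s3 = boto3.client("s3")
    results = {"succeeded": 0, "failed": 0}
    for item in event["urls"]:
        try:
            query = urllib.parse.urlencode(item.get("params", {}))
            with urllib.request.urlopen(f"{item['url']}?{query}", timeout=30) as resp:
                body = resp.read()
            s3.put_object(
                Bucket=event["bucket"],
                Key=s3_key_for(item["id"], item.get("params", {})),
                Body=body,
                ContentType="application/json",
            )
            results["succeeded"] += 1
        except Exception:
            results["failed"] += 1
    return results  # the "job status" summary returned to the caller
```

Because a single Lambda invocation has a hard execution-time limit, large batches are usually fanned out (for example, one queue message or one invocation per URL, or a Step Functions map state) rather than looped inside one invocation; that choice is part of the design work requested here.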
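For the "does this filename exist?" requirement at millions-of-objects scale, a HEAD request against the exact key is far cheaper than listing the bucket. A minimal helper (with the client passed in so it can be exercised without AWS) might look like:

```python
def key_exists(s3_client, bucket, key):
    """Cheap existence check: HEAD the exact key instead of listing the
    bucket, which does not scale to millions of objects.
    With boto3: key_exists(boto3.client("s3"), bucket, key)."""
    try:
        s3_client.head_object(Bucket=bucket, Key=key)
        return True
    except Exception:  # boto3 raises ClientError (404) for a missing key
        return False
```

For richer queries across millions of keys (prefix or date filters, counts), the usual patterns are S3 Inventory reports or a small external index such as a DynamoDB table keyed by filename; CloudSearch is another option but is heavier-weight. Which fits best is part of the design work requested here.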
Hi, nice to meet you, and thanks for visiting my profile. I think you should do this with a batch job, because Lambda has a 300-second time limit. With more than 9 years in AWS, I can help you. Can we talk?
Hello. I have extensive experience in AWS Lambda, S3 buckets, and API development. I have done a lot of AWS and Lambda projects and will deliver good results. Thank you.