As for performance, it's ~1 Mbps per thread, I think. I'm not 100% sure, and I think this varies per system. So we can definitely hit high speeds! Right now, the internal beta builds are limited to 2 threads, because Amazon Cloud Drive is broken.

It's not worth the time (we've already spent a great deal of it) to try to work around Amazon Cloud Drive's issues just to have them changed again and again, right now. Once we're promoted back up to Production status (which took a couple of months in the first place), we'll re-evaluate the provider. For now, it will be marked as experimental and disabled by default. Alex plans on doing a post about this, explaining the issue in a bit more detail.

It's not usable for us, and there isn't a lot we can do. This is very disappointing to us, because it is definitely the cheapest solution among all of the providers. We've contacted them frequently about a number of issues and have received very few replies. Some of the issues are corrupted data on the server, or data just "disappearing", and that is why we had already enabled "upload verification" for Amazon Cloud Drive. We also found that any data, regardless of how we classified it, was getting read: the first section was analyzed and files were tagged as video/pictures/music based on this "header information". Data that we classify and flag as such SHOULD NOT be handled this way. There are a number of issues here, not just one or two. As for the API, the question is: "is it worth fighting every step of the way, or is it worth stepping back for a while?"

And when creating a drive, you can specify the chunk size; you can increase it if you want. How big are your image files? Music files? Documents?

With 1 MB as the limit (why not let the users choose?), you are bound to get tons and tons of calls. Imagine 15 GB files or more in 1 MB chunks; they could be 5 MB or 10 MB chunks instead. About files that disappear: I know that happens on NetDrive, ExpanDrive, and even Amazon's own desktop app sometimes, so it is an error on their side. I've experienced it several times myself; in those apps the solution is to upload again. Of course, with chunks it is more important to make sure that everything gets uploaded correctly, and I guess that upload verification is the way to go here. Them changing the status must be down to some excessive use of the API or bandwidth, and it sounds weird that they won't specify it directly. Maybe Amazon wants to limit the bandwidth depending on the type of file, since they want to read the header and flag files. Wouldn't it be possible to add a header to the chunks specifying the type, if it's required for them to know it? While encrypted, they can't read the contents anyway, which is the important thing for privacy.

First of all, thanks for bugging Amazon about this. Basically, it's all up to them at this point. Alex (the developer) is the one that spoke with Amazon about this. One user was using 50 MB/s (400 Mbps) of bandwidth, and that was the issue that caused them to demote us from Production status to Development status. I suspect that was, in part, because we weren't able to identify which user it was (there was some back and forth between Alex and Amazon), which is ironic, as you've probably seen that we require authorization from Amazon Cloud Drive. I can't speak to why they decided to demote us, but it really pulls the carpet out from under us. While we're stuck on Development status, our product's connection to Amazon Cloud Drive is severely throttled. We have too many people using it for Development status, and the more users using it concurrently, the worse it gets (which is why powerpad was able to get good speeds and most of our users can't). There isn't anything we can really do about either of these.
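The chunk-size complaint is easy to quantify. A minimal arithmetic sketch (plain Python, not the product's actual API; the one-call-per-chunk assumption is a simplification, since real clients also make calls for retries and metadata) using the sizes and bandwidth figures quoted in the thread:

```python
import math

# Rough arithmetic behind the chunk-size discussion above.
# Assumes one upload API call per chunk (a simplification).

def upload_calls(file_size_gb: float, chunk_mb: float) -> int:
    """Number of chunk uploads needed for a file of the given size."""
    return math.ceil(file_size_gb * 1024 / chunk_mb)

for chunk_mb in (1, 5, 10):
    print(f"15 GB file, {chunk_mb} MB chunks: {upload_calls(15, chunk_mb)} uploads")
# 1 MB chunks -> 15360 uploads; 10 MB chunks -> 1536 (10x fewer API calls)

# Bandwidth figures quoted in the thread:
print(50 * 8, "Mbps")  # the 50 MB/s user = 400 Mbps, the usage behind the demotion
print(1 * 2, "Mbps")   # ~1 Mbps per thread x 2 threads while throttled
```

The 10x reduction in call volume is why larger chunks matter so much against a per-call rate limit.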
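The "upload verification" idea discussed above (check each chunk after upload, and re-upload on mismatch, since "the solution is to upload again") can be sketched generically. This is a hypothetical illustration, not the product's actual implementation; `DictStore` is a stand-in for any provider client with `put`/`get`:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def upload_with_verification(store, key: str, chunk: bytes, retries: int = 3) -> bool:
    """Upload a chunk, read it back, and compare checksums.

    Re-uploads if the stored copy does not match, which is the
    "upload again" recovery described in the thread.
    """
    expected = sha256(chunk)
    for _ in range(retries):
        store.put(key, chunk)
        if sha256(store.get(key)) == expected:
            return True
    return False

# Tiny in-memory stand-in for a cloud provider, for demonstration only.
class DictStore:
    def __init__(self):
        self.data = {}
    def put(self, key, value):
        self.data[key] = value
    def get(self, key):
        return self.data[key]

store = DictStore()
assert upload_with_verification(store, "chunk-0001", b"encrypted chunk bytes")
```

Note that the read-back costs an extra download per chunk, which is another reason a 1 MB chunk limit multiplies API traffic.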