-
In "Failed running job" email, indicate which stack and server has failed
Right now, there is no indication which stack and which server has caused a problem. This is problematic if there are multiple stacks with the same job names.
1 vote
-
Allow custom Nginx configuration to live inside a repo's .cloud66/ directory
There's a lot of pseudo-version-tracking going on in the web interface around customizing your nginx.conf. How about just allowing users to clone the default into .cloud66/nginx.conf, and using it if present when deploying a stack?
Our team would definitely like to have this configuration versioned alongside the rest of our app.
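The requested behavior boils down to a simple deploy-time check: prefer the repo's version-tracked config when it exists, otherwise fall back to the platform default. A minimal sketch (the helper name and default path are illustrative assumptions, not real Cloud 66 deploy code):

```python
import os

def pick_nginx_conf(repo_dir, default_conf="/etc/nginx/nginx.conf"):
    """Prefer a version-tracked .cloud66/nginx.conf from the repo;
    fall back to the platform default when the repo has none.
    (Hypothetical sketch of the proposed behavior.)"""
    custom = os.path.join(repo_dir, ".cloud66", "nginx.conf")
    return custom if os.path.isfile(custom) else default_conf
```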
23 votes
-
It would be great if Cloud66 would remove the old server entry from Logentries when rescanning logs and adding a new server entry.
21 votes
We have modified our Logentries re-scanning procedure to append to existing logs for a host instead of creating a new host. This should also solve your issue.
-
Zero downtime stack fail/switch-over by "stealing" load balancer
When cloning and switching traffic to a new stack, there is unavoidable downtime as we wait for DNS to propagate. What if there were a feature that would allow "stealing" a load balancer from one stack and have it point to another?
The way this would work is you would clone your new stack and get it ready to go live. Once ready, you would have Cloud 66 redirect the old load balancer's traffic to the new stack. Traffic would instantly switch from the old stack to the new.
To do this safely you would probably still have a bit of…
1 vote
This can be accomplished using a failover group:
https://help.cloud66.com/docs/failover-groups/failover-groups
Downtime can also be minimized using database replication across applications.
https://help.cloud66.com/docs/databases/database-replication#for-multiple-applications
-
Keep failed deploys around
We play around with deploy hooks a lot for things like non-standard asset pipelines, etc. Often these fail on deploy. It'd be nice if the deploy folder were kept around so we can try to reproduce the failure and see what's going wrong. Maybe move the failed deploy into a $STACKPATH/faileddeploys folder or something? EngineYard does this.
1 vote
Failed deployments are kept in a directory on the server. The directory for each failed deployment can be found at the bottom of the failed deployment log.
-
Ability to choose the drive type and size when creating a cloud stack
For instance, choose an SSD drive of 50 GB on Google Compute Engine instead of the default 160 GB Standard Persistent Disk, with no ability to change it.
36 votes
-
parallel migration deploys should wait until migrations end
Parallel migrations can potentially cause issues during deployment, even for changes that aren't destructive (add column/table), since code which accesses those columns/tables could be deployed before the migration is run.
To fix that parallel deploys should either:
a) wait for the first server to finish migrations before the other servers deploy.
b) start deploys, but only change symlinks and restart after the migrations have finished.
6 votes
-
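The second option above (deploy everywhere, but only switch symlinks after migrations finish) can be sketched with a shared gate: every server builds its new release, the migrating server runs migrations and opens the gate, and the others wait on it before going live. A minimal sketch with stubbed deploy steps, not Cloud 66's actual orchestration:

```python
import threading

migrations_done = threading.Event()
events = []                      # records the order of steps, for illustration
lock = threading.Lock()

def log(step):
    with lock:
        events.append(step)

def deploy(server, is_migrator):
    log(f"build:{server}")       # new release is on disk but not yet live
    if is_migrator:
        log(f"migrate:{server}")
        migrations_done.set()    # open the gate for the other servers
    else:
        migrations_done.wait()   # block before going live
    log(f"switch:{server}")      # swap the "current" symlink and restart

threads = [threading.Thread(target=deploy, args=(s, s == "web1"))
           for s in ("web1", "web2", "web3")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With this ordering, no server serves new code against an unmigrated schema, because every symlink swap happens after the migration step.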
allow deployment based on git SHA
Instead of just picking a branch, being able to specify the exact SHA to deploy would be beneficial.
3 votes
Toolbelt now supports deployment from a git ref: http://help.cloud66.com/toolbelt/toolbelt-stack-management
-
Allow using custom ENV_VARs between stacks
Use case: a stack with an API application defines its API key in the env vars. Other stacks consuming the API could set the API key by just referencing the API stack's env var.
1 vote
This is possible. An example can be found here: http://community.cloud66.com/articles/sharing-a-database-between-stacks
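The idea amounts to resolving one stack's variable against another stack's environment. A sketch of such a lookup (the ref:// syntax and the in-memory store are invented for illustration; Cloud 66 does not document this scheme):

```python
# Stand-in for wherever each stack's environment variables live.
STACK_ENV = {
    "api-stack": {"API_KEY": "s3cr3t-key"},
}

def resolve(value):
    """Expand references of the invented form 'ref://<stack>/<var>';
    plain values pass through unchanged."""
    prefix = "ref://"
    if value.startswith(prefix):
        stack, var = value[len(prefix):].split("/", 1)
        return STACK_ENV[stack][var]
    return value
```

A consuming stack would then set API_KEY=ref://api-stack/API_KEY once, and pick up changes whenever the API stack rotates its key.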
-
Ability to generate manifest.yml from an existing stack
This would allow a user to get the stack close via the web UI, then download the manifest, add it to source control, and tweak the stack in the manifest file.
10 votes
This is possible through the toolbelt stacks configure commands.
-
Ability to scale Elasticsearch
Elasticsearch without the ability to scale to more than 1 node is ok for development, but any serious production environment needs to shard/replicate data to an elasticsearch cluster. Elasticsearch itself has great support for autoscaling.
BTW: Running ES on OpenJDK is really not a good idea.
188 votes
Relevant help page can be found at http://help.cloud66.com/database-management/elasticsearch-scaling
-
deploy progress bar
Show a deploy progress bar that estimates time left and/or a % deployed so far.
6 votes
Progress bars have been added to the UI.
-
Global hipchat setting
Would it be possible to enter the HipChat key and room in a global setting somewhere, for example under 'Account', and then just enable all HipChat notifications whenever a new stack is created?
It's a laborious task searching for the API key and room token each time and then having to enable HipChat for all of the notifications.
1 vote
You can copy notifications from one stack to another. Of course this will not help when the notification setting doesn’t exist on any stack.
-
Better email subjects for failures.
If I'm glancing at my email, the subject lines for a failed and a successful deploy are identical up until the very last word.
Perhaps a better subject for a failed deploy would be: "[Cloud 66] Failed redeploy to "Stack Name""
That would distinguish it, and put the most important idea first and foremost.
Thanks!
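The suggested format is straightforward to express: lead with the outcome so the two subjects diverge at the first word after the prefix. A sketch of the requested format (not Cloud 66's actual mailer):

```python
def deploy_subject(stack_name, succeeded):
    """Build an email subject that puts the deploy outcome first,
    per the request above. (Illustrative helper, hypothetical name.)"""
    status = "Successful" if succeeded else "Failed"
    return f'[Cloud 66] {status} redeploy to "{stack_name}"'
```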
1 vote
-
Add support to Microsoft Azure
Microsoft has a very good deal for startups with its BizSpark program: it gives $150 of credit each month to be used on Azure. It would be great if you supported it like you do Amazon, DigitalOcean, etc.
35 votes
-
Create a way to close an account
Please close my account. Username is: carybriel@gmail.com. Please advise how to do this. Currently, when I log in, I'm locked into a payment screen and can't even get to support. Thanks.
1 vote
-
Restart webserver on security updates
The web server should be restarted to take advantage of newly updated packages. For example, CVE-2014-0160 was not fixed until I manually restarted nginx.
1 vote