Please see http://status.cloudbees.com for status indicators and high level system status information.
For support, please visit support.cloudbees.com or email support@cloudbees.com
Thursday, 15 December 2011
Routing issues resolved
Routing issues were resolved some time ago. If you notice any unexpected failures when you deploy a new version of your app (500 errors) and your app appears "healthy" in your logs, please raise a support ticket.
Wednesday, 14 December 2011
Current intermittent issues with the CloudBees router
Affecting some RUN@cloud users. Users with SSL or dedicated proxies should be unaffected, as should those who have not deployed apps recently.
Thursday, 20 October 2011
Grand Central maintenance completed
Maintenance completed. Access should be restored and systems back to normal. Thanks for your patience.
Grand Central currently being updated
Critical maintenance is being performed, so grandcentral.cloudbees.com may be unavailable for several minutes.
Tuesday, 20 September 2011
Intermittent logins
We are currently working to resolve intermittent login failures. Logins may fail briefly while the affected services are being worked on; if this happens, please try again shortly.
Tuesday, 7 June 2011
Grand Central service has been restored
All systems are back to normal at this time.
Once again, thanks for your patience.
GrandCentral access currently unavailable
GrandCentral is currently experiencing some issues, which we expect to be resolved shortly.
This may affect logging in to some services and any changes you need to make to your account.
Most build services and RUN@cloud applications should continue to run through this.
Thanks for your patience.
Wednesday, 27 April 2011
All Systems Fully Operational
All CloudBees Inc. systems are now operating correctly.
All customer data has been restored to a consistent state as of the point when connectivity was lost.
We are undertaking various activities to mitigate the risk of similar failures or data loss in the future.
For more information see our blog.
Monday, 25 April 2011
AWS Downtime - Final Status - 25-Apr-2011
Over the weekend we were able to unlock the last remaining SVN accounts; all services are now fully up and running, and all data has been fully restored. No customers suffered any data loss.
As previously mentioned, we would like to apologize for this downtime and reiterate that we will work on processes to improve our availability should our hosting provider suffer another serious outage.
Onward,
Sacha Labourey
Friday, 22 April 2011
AWS Downtime - Status - 22-Apr-2011
Background
Starting yesterday at 12:00 GMT, CloudBees experienced partial downtime of its services.
CloudBees currently hosts its services on AWS, in the East-1 region. Yesterday, AWS started experiencing serious problems with their infrastructure; its EBS service (used to store data) was the most heavily impacted.
While most RUN@cloud applications were able to run properly, DEV@cloud services were impacted by the AWS outage, and our users were no longer able to log into our services.
Actions taken
We quickly took action to keep CloudBees services usable by moving load to less impacted AWS zones. By midnight (GMT), most services were up and running again (login, SSO, GrandCentral, Jenkins, RUN@cloud). However, our forge services (Git, SVN and Maven repositories) were still experiencing difficulties.
Forge Status
We have been able to resume forge operations. However, the data we have recovered so far predates the initial downtime by 12 hours, i.e. any data stored in Git/SVN/Maven between Thursday 0:00 GMT and Thursday 12:00 GMT is not part of the recovered information. This information is not lost, however: it is being held "hostage" by AWS' recovery procedure as they re-mirror their data. Once that process is done, our goal is to reconcile the 12 hours of missing data with the current forge data. This process will depend on the type of repository:
- Git repositories: these repositories are accessible as of now. If you have information missing from the 12-hour period, you can simply push again from your local repository to restore it.
- Subversion repositories: we have decided to forbid WRITE access to these repositories and only allow READ access, so that we can properly reconcile them once the data is made available to us by AWS. If you do not want to wait and would rather do the reconciliation yourself now, please open a support ticket.
- Maven repositories: much like the Git repositories, these remain accessible read-write and will be reconciled once AWS is fully back online.
Next
We will provide a new report as soon as we have more information available from AWS.
We would like to apologize for this downtime, and we will work on processes to improve our availability should our hosting provider suffer another serious outage.