<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>From the trenches &#8211; Serversaurus Blog</title>
	<atom:link href="https://blog.serversaurus.com.au/category/from-the-trenches/feed/" rel="self" type="application/rss+xml" />
	<link>https://blog.serversaurus.com.au</link>
	<description></description>
	<lastBuildDate>Fri, 19 Jul 2019 00:09:28 +0000</lastBuildDate>
	<language>en-AU</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.2.2</generator>

<image>
	<url>https://blog.serversaurus.com.au/wp-content/uploads/2017/12/SS_LOGO_2017_copy-150x150.png</url>
	<title>From the trenches &#8211; Serversaurus Blog</title>
	<link>https://blog.serversaurus.com.au</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Hosting one of the three largest comedy festivals in the world</title>
		<link>https://blog.serversaurus.com.au/hosting-one-of-the-three-largest-comedy-festivals-in-the-world/</link>
				<pubDate>Thu, 09 May 2019 04:20:41 +0000</pubDate>
		<dc:creator><![CDATA[Nick]]></dc:creator>
				<category><![CDATA[From the trenches]]></category>
		<category><![CDATA[Operations]]></category>
		<category><![CDATA[Servers]]></category>
		<category><![CDATA[Tech]]></category>

		<guid isPermaLink="false">https://blog.serversaurus.com.au/?p=346</guid>
				<description><![CDATA[Each year, Serversaurus collaborates closely with the Melbourne International Comedy Festival (MICF), as digital works ramp up before the one month festival begins in March. We work closely with the festival and their developers, preparing, testing and deploying adequate infrastructure to support the onslaught of ticketing, calendar and program traffic. The festival attracts close to 800,000 visitors each season, with&#46;&#46;&#46;]]></description>
								<content:encoded><![CDATA[
<p>Each year, Serversaurus works closely with the Melbourne International Comedy Festival (MICF) as digital work ramps up ahead of the one-month festival beginning in March. Together with the festival and its developers, we prepare, test and deploy the infrastructure needed to support the onslaught of ticketing, calendar and program traffic. </p>



<p>The festival attracts close to 800,000 visitors each season, with hundreds of thousands of tickets sold and managed via the MICF website. Serversaurus has been collaborating with MICF by providing infrastructure, devops, support and 24&#215;7 management for the festival website since 2016. <strong>We&#8217;re proud to boast a zero-downtime partnership</strong> since then, with traffic growing year-on-year. <strong>Overall traffic is up 65% since 2016, with views up by nearly a quarter since 2018, reaching 4.8 million in 2019. </strong></p>



<p>The website and its underlying infrastructure form one of the festival&#8217;s most important communications channels, connecting audiences, artists and key stakeholders. The website is critical throughout the entire festival, supporting patrons in navigating and planning the enormous festival program.</p>



<p>Serversaurus achieves a high level of hardware redundancy by running the infrastructure in a parallel configuration across a range of independent servers, including dedicated database and caching infrastructure. The entire puzzle is connected and kept in sync through distributed service discovery, HAProxy, Ansible and in-house Go applications.</p>
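<p>As a rough illustration only (not our production tooling), the glue between service discovery and the load balancer can be sketched in a few lines of Go: ask a Consul-style health API for the passing instances of a hypothetical <code>web</code> service and print HAProxy <code>server</code> lines for a templated backend. The service name, and the assumption that Consul specifically sits underneath this cluster, are for the example only.</p>

<pre><code>package main

import (
	"fmt"
	"log"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	// Talk to the local Consul agent (default address 127.0.0.1:8500).
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask the health API for instances of a hypothetical "web" service
	// whose checks are passing; the service name is illustrative only.
	entries, _, err := client.Health().Service("web", "", true, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Print one HAProxy "server" line per healthy instance; a config
	// templater would write these into the backend stanza and reload HAProxy.
	for _, e := range entries {
		addr := e.Service.Address
		if addr == "" {
			addr = e.Node.Address // fall back to the node address
		}
		fmt.Printf("    server %s %s:%d check\n", e.Node.Node, addr, e.Service.Port)
	}
}
</code></pre>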



<p>We can&#8217;t wait for 2020!</p>
]]></content:encoded>
										</item>
		<item>
		<title>Serversaurus v2 cloud</title>
		<link>https://blog.serversaurus.com.au/serversaurus-v2-cloud/</link>
				<pubDate>Tue, 10 Jul 2018 06:23:10 +0000</pubDate>
		<dc:creator><![CDATA[Nick]]></dc:creator>
				<category><![CDATA[From the trenches]]></category>
		<category><![CDATA[Operations]]></category>

		<guid isPermaLink="false">https://blog.serversaurus.com.au/?p=302</guid>
				<description><![CDATA[Over the last year, Serversaurus has been quietly building a version 2 cloud in parallel with our original legacy cloud which was first booted in 2010. This has been a complex and time-consuming process, requiring us to completely maintain and interconnect two physically disparate platforms, while continuing to provide 100% uptime for our customers. Storage resilience Our new v2 platform&#46;&#46;&#46;]]></description>
								<content:encoded><![CDATA[<p>Over the last year, Serversaurus has been quietly building a version 2 cloud in parallel with our original legacy cloud, which was first booted in 2010.</p>
<p>This has been a complex and time-consuming process, requiring us to completely maintain and interconnect two physically disparate platforms, while continuing to provide 100% uptime for our customers.</p>
<h3>Storage resilience</h3>
<p>Our new v2 platform comes with a range of features and benefits, including storage and virtualisation resilience by design, accomplished through a new interconnected distributed storage network. Our new storage platform maintains two copies of all customer data across a network of SSD-powered hypervisors. <strong>What this means is that if the physical server hosting your virtual server fails, your virtual server is automatically migrated onto standby capacity within the network.&nbsp;</strong></p>
<p>This design also allows us to hot-migrate customer infrastructure for maintenance. On competitor platforms, underlying hardware maintenance commonly requires you to completely rebuild your infrastructure on new hardware via automation tools. Our model allows us to move the entire virtual machine without touching the contents of the machine itself.</p>
<h3>Network resilience</h3>
<p>Our new v2 cloud has also given us the opportunity to redesign how we network our infrastructure. Our cloud consists of three network layers: storage, management &amp; transit.&nbsp; We&#8217;ve coupled storage &amp; transit onto stacked switches which run in parallel, allowing one switch to fail completely while maintaining 100% uptime. Our management network mirrors this topology through bonding. This means we can handle a complete switch failure on both of our core networks without disrupting availability. <strong>This level of hardware redundancy runs all the way up through our transit provider and associated 2N datacentre architecture.</strong></p>
<h3>Instant VM provisioning</h3>
<p>On top of better performance, better reliability and better architecture, we&#8217;ll also be offering public access to our VM provisioning dashboard within our Melbourne PoP.</p>
]]></content:encoded>
										</item>
		<item>
		<title>Simple scaling with Serversaurus</title>
		<link>https://blog.serversaurus.com.au/simple-scaling-with-serversaurus/</link>
				<pubDate>Tue, 14 Mar 2017 04:31:27 +0000</pubDate>
		<dc:creator><![CDATA[Nick]]></dc:creator>
				<category><![CDATA[From the trenches]]></category>

		<guid isPermaLink="false">https://blog.serversaurus.com.au/?p=189</guid>
				<description><![CDATA[From the trenches: Custom high traffic applications, devops, management &#38; technical insights from Serversaurus projects. Overview If you&#8217;re a small to medium sized web development agency used to working primarily in single-node environments (often utilising off-the-shelf-CMS platforms in traditional LAMP environments), where do you go to achieve some semblance of scale, when you land a large project without completely changing&#46;&#46;&#46;]]></description>
								<content:encoded><![CDATA[<p><strong>From the trenches:</strong> <em>Custom high traffic applications, devops, management &amp; technical insights from Serversaurus projects.</em></p>
<h2>Overview</h2>
<p>If you&#8217;re a small to medium sized web development agency used to working primarily in single-node environments (often utilising off-the-shelf CMS platforms in traditional LAMP environments), where do you go to achieve some semblance of scale when you land a large project, without completely changing tack? Often, scaling within a more traditional PaaS / 12 Factor Application environment forces developers into a complex deployment and development methodology. This is perfectly fine if you are developing a product or application from the ground up which will be locked to a PaaS provider&#8217;s application ecosystem; however, many web agencies are building on top of their existing CMS platforms, or utilising off-the-shelf CMS solutions such as Expression Engine, Craft, etc.</p>
<p>We recently worked on a fairly large website with a difficult traffic profile (ticketing, high-profile media announcements, etc.), built in a standard LAMP-based CMS stack, which required a simple solution from both a technical and a management perspective &#8211; a solution we could configure &amp; set up that still gave the developers enough control to do their work, without needing to interact with us for cluster management or having to customise their application for the environment itself.</p>
<p>To achieve a deployment workflow familiar to the developers, along with an architecture the application itself could function in, we developed an application cluster that was completely controllable via a custom management UI. This solution featured traditional web service technologies, including load balancing, MySQL, nginx and Varnish caching, and could be managed by the web dev agency completely independently of us.</p>
<h2>Deployment Groups</h2>
<p>Special requirements included two deployment groups of web nodes (Deployment Groups app-a, app-b), allowing the developers to transparently pull a two-node group offline for management/code deployments, without disrupting the live website. This deployment architecture still left two parallel web nodes live online at any one time in order to cope with high traffic, even if a code re-deployment was required during a busy period.</p>
<p><img class="alignnone size-full wp-image-190" src="https://blog.serversaurus.com.au/wp-content/uploads/2017/12/cluster_diagram-1-1.png" alt="" width="600" height="1179" srcset="https://blog.serversaurus.com.au/wp-content/uploads/2017/12/cluster_diagram-1-1.png 600w, https://blog.serversaurus.com.au/wp-content/uploads/2017/12/cluster_diagram-1-1-153x300.png 153w, https://blog.serversaurus.com.au/wp-content/uploads/2017/12/cluster_diagram-1-1-521x1024.png 521w" sizes="(max-width: 600px) 100vw, 600px" /></p>
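<p>To give a feel for the mechanics (a sketch under assumed names, not the cluster&#8217;s actual tooling), the &#8216;pull a group offline&#8217; action can be driven through HAProxy&#8217;s runtime API socket, disabling or re-enabling the servers that make up one deployment group:</p>

<pre><code>package main

import (
	"fmt"
	"io"
	"log"
	"net"
	"os"
)

// haproxySock is an assumed path; it must match the "stats socket ... level admin"
// directive in haproxy.cfg for enable/disable commands to be accepted.
const haproxySock = "/var/run/haproxy.sock"

// runtimeCmd sends one command to the HAProxy runtime API and returns the reply.
func runtimeCmd(cmd string) (string, error) {
	conn, err := net.Dial("unix", haproxySock)
	if err != nil {
		return "", err
	}
	defer conn.Close()
	if _, err := fmt.Fprintf(conn, "%s\n", cmd); err != nil {
		return "", err
	}
	out, err := io.ReadAll(conn)
	return string(out), err
}

func main() {
	// Usage: groupctl disable|enable  (drains or restores group app-a).
	action := "disable"
	if len(os.Args) > 1 {
		action = os.Args[1]
	}
	// Backend and server names are placeholders, not the real cluster config.
	for _, srv := range []string{"web_backend/app-a-1", "web_backend/app-a-2"} {
		reply, err := runtimeCmd(action + " server " + srv)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s: %q\n", action, srv, reply)
	}
}
</code></pre>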
<h2>Caching</h2>
<p>Because of the expected high traffic profile of the site, Varnish caching was a mandatory requirement to &#8216;protect&#8217; the dynamic infrastructure from unnecessary work. As anyone who has worked with any kind of caching will know, there are times when caching needs to be bypassed for debugging &#8211; so we provided a caching &#8216;switch&#8217; to pass all traffic through at the click of a button.</p>
<h2>Cache Invalidation</h2>
<p>Additional caching management tools included an invalidation dropdown for the live and staging environments, allowing developers to easily flush/ban the Varnish cache on demand.</p>
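<p>For illustration, assuming a VCL that accepts a custom BAN method from trusted hosts and turns an <code>X-Ban-Url</code> header into a ban expression (the real VCL is not shown here), an on-demand invalidation can be as small as this Go sketch:</p>

<pre><code>package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Send a custom BAN request to the Varnish listener. The 127.0.0.1:6081
	// endpoint, the BAN method and the X-Ban-Url header are assumptions that
	// only work if the site's VCL is written to accept them from trusted hosts.
	req, err := http.NewRequest("BAN", "http://127.0.0.1:6081/", nil)
	if err != nil {
		log.Fatal(err)
	}
	// Ban everything under /events/ so fresh content is fetched from the backend.
	req.Header.Set("X-Ban-Url", "^/events/")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("Varnish replied:", resp.Status)
}
</code></pre>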
<h2>VCL Includes</h2>
<p>To give developers a simplified way to apply custom VCL rules, we provided an interface for appending their own Varnish rules to the main Varnish configuration.</p>
<h2>Technologies</h2>
<p>The primary nodes were built from CentOS templates, managed by a suite of customised Serversaurus <a href="https://www.ansible.com/" target="_blank" rel="noopener">Ansible</a> recipes, and distributed across a range of physical hypervisors for redundancy. The entire cluster sits behind an <a href="http://www.haproxy.org/" target="_blank" rel="noopener">HAProxy</a> load balancer, utilising the <a href="https://www.consul.io/" target="_blank" rel="noopener">Consul</a> key-value store for dynamic service configuration, managed by a customised Go-based management UI.</p>
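<p>As a hedged sketch of that dynamic configuration piece, a management UI action can be reduced to writing a key into Consul&#8217;s KV store, which a config templater watches in order to regenerate and reload the relevant service configuration. The key name below is illustrative, not the schema the real UI uses:</p>

<pre><code>package main

import (
	"fmt"
	"log"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local Consul agent.
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	kv := client.KV()

	// Flip an illustrative cache pass-through flag; the key name is an
	// assumption, not the schema the management UI actually uses.
	pair := &amp;consul.KVPair{Key: "cluster/varnish/passthrough", Value: []byte("true")}
	if _, err := kv.Put(pair, nil); err != nil {
		log.Fatal(err)
	}

	// A templater watching this prefix would regenerate the Varnish/HAProxy
	// configuration and reload the affected services when the value changes.
	got, _, err := kv.Get("cluster/varnish/passthrough", nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s = %s\n", got.Key, got.Value)
}
</code></pre>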
]]></content:encoded>
										</item>
	</channel>
</rss>
