VMware vSAN vExperts 2018.
June 8, 2018 Leave a comment I've just found out that I've been selected as a vSAN vExpert again this year, which was great news indeed.
The complete list of vSAN vExperts 2018 can be found at https://blogs.vmware.com/vmtn/2018/06/vexpert-vsan-2018-announcement.html. The vSAN vExpert programme is a sub-programme of the wider VMware vExpert programme: out of those already selected as vExperts, the people who have shown specific specialisation and thought leadership around vSAN and related hyper-converged technologies are recognised for their efforts.
The vSAN vExpert programme only started back in 2016, and while I missed out during the first year, I was a vSAN vExpert in 2017 too, so it's quite nice to have been selected again for 2018.
As part of the vSAN vExpert programme, selected members are typically entitled to a number of benefits, such as NFR licence keys for the full vSAN suite for lab and demo purposes, access to the vSAN product management team at VMware, exclusive webinars and NDA meetings, and access to preview builds of new software. We also get a chance to provide feedback to the product management team on behalf of our clients, which is great for me as a technologist working in the channel.
I have been a big advocate of software-defined everything for about 15 years now, as, the way I saw it, the power of most technologies is often derived from software.
Public cloud is the biggest testament to this we can see today.
So when HCI became a "thing", I was naturally a big promoter of the concept, and realistically, the Software Defined Storage (SDS) that made HCI what it is was something I'd always seen the value in.
While many other SDS technologies have appeared since then, vSAN was always unique in that it is more tightly coupled to the underlying hypervisor than any other HCI/SDS solution, and this architectural difference is the main reason why I have always liked, and therefore promoted, vSAN from its beta days.
Well, vSAN revenue has grown massively for VMware since its first launch with vSAN 5.5, and the vSAN business unit within VMware is now a self-sufficient business in its own right.
Since I am fortunate to work for a VMware solutions provider partner here in the UK, I have seen first-hand the number of vSAN solutions we've sold to our own customers grow over 900% year on year between 2016 and 2017, which fully aligns with the wider industry adoption of vSAN as a preferred storage option for most vSphere solutions.
This is only likely to increase, and some of the hardware innovation coming down the line, such as Storage Class Memory integration and NVMe over Fabrics technologies, will further enhance the performance and reliability of genuinely distributed software-defined storage technologies such as vSAN.
So being recognised by VMware as a thought leader and community evangelist for vSAN is a great honour, as I can continue to share my thoughts and updates on the product's development with the wider community for other people to benefit from.
So thank you VMware for the honour again this year, and congratulations to all the others who have also been selected as vSAN vExperts 2018.
Keep sharing your knowledge and thought leadership content…
VMworld 2017 – vSAN New Announcements & Updates.
August 28, 2017 1 Comment During VMworld 2017 in Vegas, a number of vSAN-related product announcements were made, and I was privy to some of those a little earlier than the rest of the general public due to being a vSAN vExpert.
I've summarised those below.
The embargo on disclosing the details lifts at 3pm PST, which is when this blog post is scheduled to go live automatically.
So enjoy.
vSAN Customer Adoption.
As some of you may know, the popularity of vSAN has been growing for a while now as a preferred alternative to legacy SAN vendors when it comes to storing vSphere workloads.
The stats below somewhat confirm this growth.
I can testify to this personally too, as I've seen a similar increase in the number of our own customers that consider vSAN the default choice for storage now.
Key new Announcements.
New vSAN based HCI Acceleration kit availability.
This is a new ready-node programme being announced with some OEM hardware vendors, aimed at providing distributed data centre services for edge computing platforms.
Consider it to sit somewhere in between the vSAN ROBO solution and a full-blown main data centre vSAN solution.
Highlights of the offering are as follows: 3 x single-socket servers.
Includes vSphere Standard + vSAN Standard (vCenter is excluded).
Launch hardware partners limited to Fujitsu, Lenovo, Dell & Super Micro only.
25% default discount on list price (on both hardware & software).
$25K starting price.
My thoughts: Potentially a good move and an interesting option for those customers who have a main DC elsewhere or are primarily cloud based (including VMware Cloud on AWS).
The practicality of vSAN ROBO was always hampered by the fact that it's limited to 25 VMs on 2 nodes.
This should slightly increase that market adoption; however, the key deciding factor will be the pricing.
Noticeably, HPE are absent from the initial launch, but I'm guessing they will eventually sign up. Note that you have to have an existing vCenter licence elsewhere, as it's not included by default.
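Out of curiosity, the announced numbers let you back out the implied list price. A minimal Python sketch of my own arithmetic, assuming the $25K starting price is the post-discount figure:

```python
# Back-of-the-envelope sketch (my own assumption: the $25K "starting price"
# is what you pay AFTER the 25% default discount on list).

DEFAULT_DISCOUNT = 0.25   # 25% off list price, per the announcement
STARTING_PRICE = 25_000   # USD, assumed to be the discounted starting price

def implied_list_price(discounted: float, discount: float) -> float:
    """Back out the list price from a discounted price."""
    return discounted / (1 - discount)

# implied list price works out to roughly $33.3K
print(round(implied_list_price(STARTING_PRICE, DEFAULT_DISCOUNT), 2))  # → 33333.33
```

If the $25K figure is actually the list price instead, the maths is even simpler: the discounted entry point would be $18,750.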
vSAN Native Snapshots Announced.
A tech preview of native vSAN data protection capabilities through snapshots has been announced, and these will likely be generally available in FY18.
vSAN native snapshots will have the following characteristics.
Snapshots are all policy driven.
5-minute RPO.
100 snapshots per VM.
Supports data efficiency services such as dedupe, as well as protection services such as encryption.
Archival of snapshots will be available to secondary object or NAS storage (no specific vendor support required), or even cloud (S3?).
Replication of snapshots to a DR site will be available.
My thoughts: This was a hot request and something that was a long time coming.
Most vSAN solutions need a third-party data centre backup product today, and often, SAN vendors used to provide this type of snapshot-based backup solution from the array (the NetApp SnapManager suite, for example) that vSAN couldn't match.
Well, it can now, and since it's done at the software layer, it's array independent, and you can replicate or archive those snapshots anywhere, even in the cloud. For lots of customers with a smaller or point use case, this would be more than sufficient to avoid buying backup licences elsewhere to protect that vSphere workload.
This is likely going to be popular.
I will be testing this out in our lab as soon as the beta code is available, to ensure the snaps don't carry a performance penalty.
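As a quick sanity check of the announced figures (a 5-minute RPO and a cap of 100 snapshots per VM), you can work out how much point-in-time history fits locally before snapshots would have to be expired or archived. My own arithmetic, not anything from VMware:

```python
# How much history does 100 snapshots at a 5-minute RPO actually buy you
# locally, before the archival tier has to take over? (my own arithmetic)

RPO_MINUTES = 5            # announced minimum RPO
MAX_SNAPSHOTS_PER_VM = 100 # announced per-VM snapshot cap

def local_history_hours(rpo_min: int, max_snaps: int) -> float:
    """Span of history covered when taking one snapshot every rpo_min minutes."""
    return rpo_min * max_snaps / 60

# roughly 8.3 hours of local point-in-time history at the tightest RPO
print(round(local_history_hours(RPO_MINUTES, MAX_SNAPSHOTS_PER_VM), 1))  # → 8.3
```

So at the tightest RPO the local snapshot chain only covers a working day's worth of restore points, which is exactly why the archival-to-object/NAS/cloud piece matters.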

VSAN on VMware Cloud on AWS Announced

Well, this is not massively new, but vSAN is a key part of VMware Cloud on AWS: the vSAN storage layer provides all the on-premises vSAN goodness while also providing DR-to-VMware-Cloud capability (using snap replication) and orchestration via SRM.
vSAN Storage Platform for Containers Announced.
Similar to the NSX-T announcement of K8s (Kubernetes) support, vSAN also provides persistent storage to both K8s and Docker container instances in order to run stateful containers.
This capability came from the VMware open-source project code-named Project Hatchway, and it is freely available via GitHub at https://vmware.github.io/hatchway/ now.
My thoughts: I really like this one, and the approach VMware is taking to make the product set more and more microservices (container-based application) friendly. It will likely be popular with many.
That said, while the code was supposed to be available on GitHub as an open-source project, I have not been able to see anything within the VMware repos on GitHub yet.
So, all in all, not very many large or significant announcements for vSAN from VMworld 2017 Vegas (yet), but this is to be expected, as the latest version, vSAN 6.6.1, was only recently released with a ton of updates.
The key takeaway for me is that the popularity of vSAN is obviously growing (well, I knew this already anyway), and the current and future announcements are going to make vSAN a fully fledged SAN/NAS replacement for vSphere storage, with more and more native security, efficiency and availability services, which is great for customers.
VMware vSAN 6.6 Release – What's New.
April 11, 2017 1 Comment VMware has just announced the general availability of the latest version of vSAN, the backbone of their native hyper-converged infrastructure offering with vSphere.
vSAN has had a number of significant upgrades since its very first launch back in 2014 as version 5.5 (with vSphere 5.5), and each upgrade has added some very cool, innovative features that have driven customer adoption of vSAN significantly.
The latest version, vSAN 6.6, is no different, and it appears to have by far the highest number of new features announced in a single upgrade release.
Given below is a simple list of some of the key features of vSAN 6.6, the 6th generation of the product. Additional native security features:
Hardware-independent data-at-rest encryption (software-defined AES-256 encryption).
Supported on all-flash and hybrid.
Data is written already encrypted.
Key management works with third-party KMS systems.
Built-in compliance with two-factor authentication (RSA SecurID and smart-card authentication).
Stretched clusters with local failure protection.
With vSAN 6.6, if a site fails, the surviving site still has local host and disk group protection (not the case with previous versions). RAID 1 over RAID 1/5/6 is supported on all-flash vSAN only.
RAID 1 over RAID 1 is supported on hybrid vSAN only.
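For a feel of what this protection costs in capacity, the overhead is site mirroring (2x) multiplied by the local protection overhead. The multipliers below are the usual textbook ones (RAID 1 mirror = 2x, RAID 5 = 4/3x, RAID 6 = 1.5x); treat this as my own back-of-the-envelope sketch, not an official sizing tool:

```python
# Rough capacity maths for "RAID 1 across sites + local protection" in a
# stretched cluster (my own sketch; standard erasure-coding overheads assumed).

LOCAL_SCHEMES = {"RAID1": 2.0, "RAID5": 4 / 3, "RAID6": 1.5}

def raw_capacity_needed(usable_tb: float, local_scheme: str) -> float:
    """Raw TB needed across both sites: site mirror (2x) times local overhead."""
    site_mirror = 2.0  # RAID 1 between the two sites
    return usable_tb * site_mirror * LOCAL_SCHEMES[local_scheme]

# e.g. 10 TB usable with local RAID 5 needs roughly 26.7 TB raw in total,
# versus 40 TB raw if you mirrored locally (RAID 1 over RAID 1)
print(round(raw_capacity_needed(10, "RAID5"), 1))
print(round(raw_capacity_needed(10, "RAID1"), 1))
```

Which is why RAID 1 over RAID 5/6 being all-flash only matters: the erasure-coded local copies claw back a big chunk of the capacity the cross-site mirror consumes.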
Proactive cloud analytics.
This sounds kind of similar to Nimble's cloud analytics platform, which is popular with customers.
Proactive cloud analytics uses support data collected globally from vSAN deployments to provide analytics through the vSAN health UI, along with some performance optimisation advice for resolving performance issues.
Intelligent & Simpler operations.
Simpler setup and post set up operations are achieved through a number of new features and capabilities.
Some of the key features include: Automated setup with 1-click installer & lifecycle management.
Automated configuration & compliance checks for the vSAN cluster (this was somewhat already available through the vSAN health UI).
Additions include networking & cluster configuration assistance.
New health checks for encryption, networking, iSCSI and re-sync operations.
Automated controller firmware & driver upgrades: this automates the download and installation of VMware-supported drivers for various drives and RAID controllers (for the entire cluster), which is significantly important.
I think this is pretty key as the number of vSAN performance issues due to firmware mismatch (especially on Dell server HW) has been an issue for a while now.
Proactive data evacuation from failing drives.
Rapid recovery with smart, efficient rebuild.
Expanded Automation through vSAN SDK and PowerCLI.
High availability.
vSAN 6.6 now includes a highly available control plane, which means resilient management is now possible independently of vCenter.
Other key features.
Increased performance: optimised for the latest flash technologies, including 1.6TB flash devices (Intel Optane drives, anyone?).
Optimize performance with actionable insights.
30% faster sequential write performance.
Optimized checksum and dedupe for flash.
Certified file service and data protection (through 3rd party partners).
Native vRealize Operations integrations.
Simple networking with Unicast.
Real time support notification and recommendations.
Simple vCenter install and upgrade.
Support for Photon 1.1.
Expanded caching tier choices.
There you go.
Another key set of features added to vSAN with the 6.6 upgrade which is great to see.
If you are a VMware vSphere customer looking at a storage refresh for your vSphere cluster, or at a new vSphere / Photon / VIC requirement, it would be silly not to look into vSAN as opposed to legacy hardware SAN technologies from a legacy vendor (unless you have non-VMware requirements in the data centre).
If you have any questions or thoughts, please feel free to comment or reach out. Additional details of what's new with VMware vSAN 6.6 are available at https://blogs.vmware.com/virtualblocks/2017/04/11/whats-new-vmware-vsan-6-6/

VSAN 6.6 New Dedicated VSAN Management Plugin For vROps Released

December 20, 2016 Leave a comment Some of you may have seen the tweets and the article from the legendary Duncan Epping here about the release of the new VMware vSAN plugin for vROps (vRealize Operations Management Pack for vSAN version 1.0). If you've ever had the previous vSAN plugin for vROps deployed, you might know that it was not a dedicated plugin for vSAN alone, but a vRealize Operations Management Pack for Storage Devices as a whole, which included visibility not just into vSAN but also legacy storage stats such as FC, iSCSI and NFS for legacy storage units (those connected to Cisco DCNM or Brocade fabric switches).
This vROps plugin for vSAN, however, is the first dedicated plugin for vSAN (hence the version 1.0) on vROps.
According to the documentation, it has the following features: Discovers vSAN disk groups in a vSAN datastore.
Identifies the vSAN-enabled cluster compute resource, host system, and datastore objects in a vCenter Server system.
Automatically adds related vCenter Server components that are in the monitoring state.
How to Install / Upgrade from the previous MPSD plugin.
Download the management pack (.pak file) from https://solutionexchange.vmware.com/store/products/vmware-vrealize-operations-management-pack-for-vsan.
Log in to the vROps instance as an administrator (or with administrative privileges) and go to Administration -> Solutions.
Click Add (the plus sign), select the .pak file, and tick the two check boxes to replace the plugin if already installed and to reset default content.
Accept any warnings and click upload.
Once the upload is complete and staged, verify the signature validity and click next to proceed.
Click next and accept the EULA and proceed.
The management plugin will start to install.
Now select the newly installed management plugin for vSAN and click Configure.
Within this window, connect to the vCenter server (you cannot reuse the credentials previously configured for MPSD).
When creating the credentials, you need to specify an admin account for the vCenter instance. The connection can be verified using the Test button.
Once connected, wait for the data collection from the vSAN cluster to complete and verify that collection is showing.

Go to Home and verify that the VSAN dedicated dashboard items are now available on vROps

By default there will now be 3 vSAN-specific dashboards available under default dashboards: vSAN Environment Overview – this section provides some vital high-level information on the vSAN cluster, including its type, total capacity, used capacity, any congestion, and average latency figures, along with any active alerts on the vSAN cluster.
As you can see I have a number of alerts due to using non-compliant hardware in my VSAN cluster.
vSAN Performance – this default dashboard provides various performance-related information and stats for the vSAN cluster and datastores, as well as the VMs residing on them.
You can also check performance figures such as VM latency and IOPS levels based on the VMs you select in the tile view, plus the trend forecast, which I think is going to be really handy.
Similarly, you can see performance at the vSAN disk group level too, showing information such as write buffer and read cache performance levels, current as well as forecast, which were not previously easily accessible.
You can also view performance at the ESXi host level, which shows basic information such as current CPU and RAM utilisation, including current and future (forecast) trend lines in true vROps style, which is going to be really well received.
Expect the content available on this page to be significantly extended in future iterations of this management pack.
Optimize vSAN Deployments – this page provides a high-level comparison of vSAN and non-vSAN environments, which would be especially handy if you have vSAN datastores alongside traditional iSCSI or NFS datastores, to see how, for example, IOPS and latency compare between VMs on vSAN and on an NFS datastore presented to the same ESXi server (I have both).
Under Environment -> vSAN and Storage Devices, additional vSAN hierarchy information such as vSAN-enabled clusters, fault domains (if relevant), disk groups and witness hosts (if applicable) is now visible for monitoring, which is really handy.
In the inventory explorer, you can see the list of vSAN inventory items for which data is being collected.
All in all, this is a welcome addition, and it will only continue to be improved, with new monitoring features added as the versions progress.
I really like the dedicated plugin factor, as well as the nice default dashboards included with this version, which will no doubt help customers truly use vROps as a single pane of glass for all things monitoring on the SDDC, including vSAN.
VMware Storage and Availability Technical Documents Hub.
November 8, 2016 Leave a comment This was something I came across accidentally, so I thought it may be worth a very brief post, as I found some useful content there.
The VMware Storage and Availability Technical Documents Hub is an online repository of technical documents and "how to" guides, including video documents, for all storage and availability products within VMware.

It has some very useful content for 4 VMware product categories (as of now): VSAN.

Virtual Volumes.
vSphere Replication.
For example, under the VSAN section there is a whole heap of VSAN 6.5 content, such as technical information on what's new with VSAN 6.5 and how to design and deploy VSAN 6.5, as well as some handy videos on how to configure some of those features too.
There also seems to be some advanced technical documentation around VSAN caching algorithms etc., plus some deployment guides, which I thought was quite handy.
Similarly, there is some good technical documentation around vVols, including an overview and how to set up and implement VVols.
In comparison, however, the content for the others is a little light next to VSAN, but I'm sure more will be added as the portal gets developed further.
All the information is presented in an HTML5 interface that is easy to navigate, with a handy print-to-PDF option on every page if you want to download the content for offline reading, which is cool.
I'd recommend checking out this documentation hub, especially if you use any storage solution from VMware such as VSAN and would like to see most of the relevant technical documentation in a single place.

NSX Future & a chat with VMware CEO – Highlights Of My Day 2 at VMworld 2016 US

September 5, 2016 Leave a comment In this post, I will aim to highlight the various breakout sessions I attended during day 2 at VMworld 2016 US, the key items/notes/points learnt, and a few other interesting things I was privy to during the day that are worth mentioning, along with my thoughts on them!

Day 2 – Breakout Session 1 – Understanding the availability features of VSAN

Session ID: STO8179R.
Presenters: GS Khalsa – Sr. Technical Marketing Manager – VMware (@gurusimran); Jeff Hunter – Staff Technical Marketing Architect – VMware (@Jhuntervmware).
In all honesty, I wasn't quite sure why I signed up for this breakout session, as I know VSAN fairly well, including its various availability features: I've been working with VSAN since it first launched, from testing and analysing its architecture and performance to designing and deploying VSAN solutions on behalf of my customers.
However, attending the session reminded me of a key fact I normally try never to forget: "you always learn something new", even when you think you know most of it.
Anyway, about the session itself: it was good, and mainly aimed at beginners to VSAN, but I did manage to learn a few new things, as well as refresh my memory on a few other facts regarding VSAN architecture.
The key new ones I learnt are as follows: VSAN component statuses (as shown within the vSphere Web Client) and their meanings.
Absent: this means VSAN thinks the said component will probably return.
Examples are:
Host rebooted.
Disk pulled.
Network partition.
Rebuild starts after 60 mins.
When an item is detected/marked as absent, VSAN typically waits for 60 minutes before starting a rebuild, in order to allow a temporary failure to rectify itself. This means, for example, that pulling disks out of VSAN will NOT trigger an instant rebuild/secondary copy etc., so it won't be an accurate test of VSAN.
Degraded: this typically means the device/component is unlikely to return.
Examples include a permanent device loss (PDL) or a failed disk.
When a degraded item is noted, a rebuild starts immediately.
Active-Stale: this means the device is back online after a failure (i.e. it was absent), but the data residing on it is NOT up to date.
VSAN drive degradation is proactively monitored and logged in the following log file: vmkernel.log, indicating LSOM errors.
Dedupe and compression during drive failures: during a drive failure, deduplication and compression (all-flash only) are automatically disabled – I didn't know this before.
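The absent-versus-degraded behaviour above boils down to a small decision rule, which can be sketched as a tiny function. This is my own illustration, not VMware code; the 60-minute figure is the default repair delay discussed in the session:

```python
# Illustrative sketch (mine, not VMware's) of the rebuild behaviour described
# above: "absent" components get a 60-minute grace period before a rebuild
# starts, while "degraded" components trigger an immediate rebuild.

ABSENT_REBUILD_DELAY_MIN = 60  # default grace period for absent components

def minutes_until_rebuild(state: str, minutes_since_event: int) -> int:
    """How many more minutes until a rebuild starts (0 = starts now)."""
    if state == "degraded":   # device unlikely to return: rebuild immediately
        return 0
    if state == "absent":     # device may return: wait out the grace period
        return max(0, ABSENT_REBUILD_DELAY_MIN - minutes_since_event)
    raise ValueError(f"unknown state: {state}")

print(minutes_until_rebuild("absent", 10))   # pulled a disk 10 min ago → 50
print(minutes_until_rebuild("degraded", 0))  # failed disk → 0
```

This is also exactly why yanking a disk is a poor failure test: you'd have to wait out the grace period before any rebuild traffic appears.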
Day 2 – Breakout Session 2 – How to deploy VMware NSX with Cisco Nexus / UCS Infrastructure.
Session ID: NET8364R.
Presenters: Paul Mancuso – Technical Product Manager (VMware).
Ron Fuller – Staff System Engineer (VMware).
This session was about a deployment architecture for NSX that is becoming increasingly popular: how to design and deploy NSX on top of Cisco Nexus switches, with ACI as the underlay network, on Cisco UCS hardware.
A pretty awesome session, and a really popular combination too.
(FYI – I've been touting that these two solutions are better together since about 2 years back, and it's really good to see both companies recognising this and now working together on providing guidance like this.)
Outside of this session, I also found out that the Nexus 9K switches will soon have OVSDB support so that they can be used as ToR switches with NSX too (hardware VTEPs to bridge VXLANs to VLANs for communication with the physical world), much like the Arista switches with NSX – great news for customers indeed.
I'm not going to summarise the content of this session, but would instead like to point people at the following 2 documentation sets from VMware, which cover everything this session was based on and, pretty simply, everything you need to know when designing NSX solutions together with Cisco ACI using Nexus 9K switches and Cisco UCS server hardware (blades & rack mounts): Design Guide for VMware NSX running with a Cisco ACI Underlay Fabric.
Reference Design: Deploying NSX for vSphere with Cisco UCS and Nexus 9000 Switch Infrastructure.
One important thing to keep in mind for all Cisco folks, though: the Cisco N1K is NOT supported for NSX.
All NSX-prepped clusters must use the vDS.
I'm guessing this is very much expected, and probably a commercial decision rather than a technical one.
Personally, I am super excited to see VMware and Cisco working together again (at least on the outset) when it comes to networking; both companies have finally realised that the use cases of ACI and NSX are somewhat complementary (i.e. ACI cannot do most of the clever features NSX is able to deliver in the virtual world, including public clouds, and NSX cannot do any of the clever features ACI can offer to a physical fabric).
So watch this space for more key joint announcements from both companies!
Day 2 – Breakout Session 3 – Containers for the vSphere admin.
Session ID: CNA7522.
Presenters: Ryan Kelly – Staff System Engineer (VMware).
A session about how VMware approaches the massive buzz around containerisation, through both their vSphere Integrated Containers solution (VIC) and a brand new hypervisor platform designed from the ground up with containerisation in mind (the Photon platform).
This was more of a refresher session for me than anything else, and I'm not going to summarise all of it; instead, I'll point you to the dedicated post I've written about VMware's container approach here.
Day 2 – Breakout Session 4 – The architectural future of Network Virtualisation.
Session ID: NET8193R. Presenter: Bruce Davie – CTO, Networking (VMware).
Probably the most inspiring session of day 2, as Bruce went through the architectural future of NSX, describing what the NSX team within VMware are focusing on as the key improvements and advancements of the NSX platform.
The summary of the session is as follows: NSX is the bridge from solving today's requirements to solving tomorrow's IT requirements.
It brings remote networking closer easily (i.e. stretched L2).
It is programmatically (read: automatically) provisioned on application demand.
Security is ingrained at the kernel level and at every hop outwards from the applications.
Challenges NSX is trying to address (future): Developers – the need to rapidly provision and destroy complex networks as a prerequisite for applications demanded by developers.
Microservices – container networking and security.
Unseen future requirements.
Current NSX architecture: Cloud consumption plane.
Management plane.
Control plane.
Data plane.
Future architecture – this is what the NSX team is currently looking at for NSX's future.
Management plane scale-out: the management plane now needs to be highly available in order to constantly take large numbers of API calls from cloud consumption systems such as OpenStack, vRA etc., driven by developer and agile-development workflows.
Using and scaling persistent memory for the NSX management layer is also being considered – the idea is to keep API requests in persistent memory in a scalable way, providing write and read scalability and durability.
Being able to take consistent NSX snapshots – point-in-time backups.
A distributed log capability is going to be key in providing this management plane scale-out: distributed logs that store all the API requests coming from cloud consumption systems will be synchronously stored across multiple nodes, providing up-to-date visibility of the complete state to all nodes, while also increasing performance due to the management node scale-out.
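The synchronous distributed-log idea can be illustrated with a toy sketch. This is heavily simplified and entirely my own illustration (a real implementation would need quorum, ordering guarantees and failure handling), but it shows the core property: every management node ends up with the same ordered history of API requests:

```python
# Toy illustration (my own, heavily simplified) of the distributed-log idea:
# every API request is appended synchronously to the log on ALL management
# nodes, so each node has an up-to-date view of the complete state.

class ManagementNode:
    def __init__(self, name: str):
        self.name = name
        self.log: list[str] = []  # this node's copy of the shared log

    def append(self, entry: str) -> None:
        self.log.append(entry)

def replicate(nodes: list[ManagementNode], api_request: str) -> None:
    """Synchronous append: the write 'completes' only once every node has it."""
    for node in nodes:
        node.append(api_request)

# hypothetical node names and API requests, purely for illustration
nodes = [ManagementNode(n) for n in ("mgmt-1", "mgmt-2", "mgmt-3")]
for req in ("create-logical-switch", "attach-port", "set-firewall-rule"):
    replicate(nodes, req)

# every node sees the same complete, ordered history
print(all(n.log == nodes[0].log for n in nodes))  # → True
```

Because any node holds the full log, reads can be served from any of them, which is where the scale-out performance benefit mentioned above comes from.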
Control plane evolution: heterogeneity – currently vSphere & KVM.
Hyper-V support is coming.
The control plane will be split into 2 layers: a central control plane.
A local control plane, with data plane (Hyper-V, vSphere, KVM) specific intelligence.
High-performance data plane: use Intel DPDK – a technology that optimises packet processing on Intel CPUs. Packet switching using x86 chips will be the main focus going forward, and new technologies such as DPDK will only make this better and better.
DPDK capabilities are best placed to optimise iterative processing rather than frequent context switching, and NSX has these optimisations built into its components.
Is using DPDK-capable CPUs in the NSX Edge rack ESXi servers potentially a good design decision?
Possible additional NSX use cases being considered: NSX for public clouds – NSX OVS and an agent are deployed in-guest; a technical preview of this solution was demoed by Pat Gelsinger during the opening keynote on day 1 of VMworld.
NSX for containers – 2 vSwitches: 1 in-guest.
1 in the hypervisor.
My thoughts: I like what I heard from Bruce about the key development focus areas for NSX, and it looks like all of us, partners and customers of VMware NSX alike, are in for some really cool, business-enabling treats from NSX going forward, which kind of reminds me of the excitement when vSphere first came out :-).
I am extremely excited about the opportunities NSX presents to remove what is often the biggest bottleneck enterprise or corporate IT teams have to overcome to simply get things done quickly: the legacy network. Networks in most organisations are still very much managed by an old-school-minded networking team that does not necessarily understand the convergence of networking with other silos in the data centre, such as storage and compute, and most importantly its convergence with modern-day applications.
It is a fact that software-defined networking will bring efficiency to networking the way vSphere brought efficiency to compute (want examples of how this SDN efficiency is playing out today?
Look at AWS and Azure as the 2 biggest use cases): the ability to spin up infrastructure along with a "virtual" networking layer significantly increases the convenience for businesses to consume IT (no waiting around for weeks for your networking team to set up new switches with some new VLANs etc.), as well as significantly decreasing the go-to-market time for those businesses when it comes to launching new products and money-making opportunities.
All in all, NSX will act as a key enabler for any business, regardless of size, to take an agile approach to IT and even embrace cloud platforms.
From my perspective, NSX will provide the same public-cloud-inspired advantages in customers' own data centres, and not only that, it will go a step further by effectively converting your WAN to an extended LAN, bridging your LAN with a remote network / data centre / public cloud platform to create something like a LAN over WAN (trademark belongs to me :-)) which can be automatically deployed and secured (encryption) while also being very application-centric (read: app developers can request networking configuration through an API as part of the app provisioning stage, which can automatically apply all the networking settings, including creating the various network segments, the routing in between, and the firewall requirements etc.).
Such networking can be provisioned all the way from a container instance where part of the app is running (i.e. a DB server instance as a container service) to a public cloud platform hosting the other parts (i.e. web servers).
I’ve always believed that the NSX solution offering is going to be hugely powerful given its various applications and use cases and natural evolution of the NSX platform through the focus areas like those mentioned above will only make it an absolute must have for all customers, in my humble view.
Day 2 – Meeting with Pat Gelsinger and Q&As during the exclusive vExpert gathering.
As interesting as the breakout sessions during the day were, this was by far the most significant couple of hours of the day for me.
As a vExpert, I was invited to an off-site, vExpert-only gathering held at the Mob Museum in Vegas, which happened to include VMware CEO Pat Gelsinger as the guest of honour.
Big thanks to the VMware community team, led by Corey Romero (@vCommunityGuy), for organising this event.
This was an intimate gathering of about 80-100 VMware vExperts who were present at VMworld, meeting at an off-site venue to discuss things, and it gave everyone a chance to meet the VMware CEO and ask him direct questions, which is something you wouldn't normally get as an ordinary person, so it was pretty good.
Pat was pretty awesome as he gave a quick speech about the importance of vExpert community to VMware followed up by a Q&A session where we all had a chance to ask him questions on various fronts.
I myself started the Q&A session by asking him the obvious question, “What will be the real impact on VMware once the Dell-EMC merger completes?”, and Pat’s answer was pretty straightforward.
As Michael Dell (who happened to come on stage during the opening day keynote speech) said himself, Dell is pretty impressed with the large ecosystem of VMware partners (most of whom are Dell competitors) and will keep that ecosystem intact going forward. Pat echoed the same message, while also hinting that Dell hardware will play a key role in all VMware product integrations, including using Dell hardware by default in most pre-validated and hyper-converged solution offerings going forward, such as using Dell rack mount servers in VCE solutions…etc.
(In Pat’s view, Cisco will still play a big role in blade-based VCE solution offerings and is unlikely to walk away from it all just because of the Dell integration, given the substantial revenue that business brings to Cisco.)
If I read between the lines correctly (these may be incorrect interpretations on my end), he also alluded to the real catch of the EMC acquisition, as far as Dell was concerned, being VMware.
Pat explained that most of the financing charges behind the capital raised by Dell will need to be paid through the EMC business’s annual run rate revenue (which, by the way, is roughly the same as the financing interest), so in a way Dell received VMware for free, and given the large ecosystem of partners all contributing towards VMware’s revenue, it is very likely Dell will continue to let VMware run as an independent entity.
There were other interesting questions from the audience, and some of the key points made by Pat in answering them were as follows.
VMware are fully committed to increasing NSX adoption by customers and see NSX as a key revenue generator due to what it brings to the table – I agree 100%.
VMware are working on providing networking customers, through NSX, a capability similar to vMotion for compute, as one of the NSX business unit’s key goals.
Pat mentioned that engineering have in fact figured this out already and are testing it internally, but it’s not quite production ready.
In relation to VMware’s Cross-Cloud Services as-a-service offering (announced by Pat during the event’s opening keynote speech), VMware are also working on offering NSX as a service – though the details were not discussed, I’m guessing this would be through the IBM and vCAN partners.
He hinted that a major announcement on the VMware Photon platform (one of the VMware vSphere container solutions) will be taking place during VMworld Barcelona – I’ve heard the same from the BU’s engineers too and look forward to the Barcelona announcements.
VMware’s own cloud platform, vCloud Air, WILL continue to stay focused on targeted use cases, while the future scale of VMware’s cloud business is expected to come from the vCAN partners (hosting providers that use VMware technologies and as a result are part of the VMware vCloud Air Network, e.g. IBM).
Pat also mentioned the focus VMware will have on IOT, and to this effect he mentioned the custom IOT solution VMware have already built or are working on (I cannot quite remember which) for monitoring health devices through the Android platform – I’m guessing this is through their Project Ice and LIOTA (Little IOT Agent) platform, which already had similar device monitoring solutions being demoed in the Solutions Exchange during VMworld 2016.
I mentioned this in my previous post here.
It was really good to have had the chance to listen to Pat up close and to be able to ask direct questions and get frank answers, which was a fine way to end a productive and educational day for me at VMworld 2016 US. Image credit goes to VMware!
VSAN, VVDs, Project Ice, vRNI & NSX – Summary Of My Breakout Sessions From Day 1 at VMworld 2016 US.
August 30, 2016 Leave a comment Quick post to summarise the sessions I attended on day 1 at @VMworld 2016 and a few interesting things I noted.
First up are the 3 sessions I had planned to attend, plus the additional session I managed to walk in to.
Breakout Session 1 – Software Defined Networking in VMware validated Designs.
Session ID: SDDC7578R.
Presenter: Mike Brown – SDDC Integration Architect (VMware).
This was a quick look at the VMware Validated Designs (VVD) in general and the NSX design elements within the SDDC stack design in the VVD.
If you are new to VVDs and are typically involved in designing solutions using the VMware software stack, they are genuinely worth reading up on, and you should try to replicate the same design principles (within your solution design constraints) where possible.
The idea is that this will enable customers to deploy robust solutions that have been pre-validated by experts at VMware, in order to ensure the highest level of cross-solution integrity for the maximum availability and agility required for a private cloud deployment.
Based on typical VMware PSO best practices, the design guide (reference architecture doc) lists out each design decision applicable to each of the solution components, along with the justification for that decision (through an explanation) as well as the implication of that design decision.
An example is given below. I first found out about the VVDs during VMworld 2015 and mentioned them in my VMworld 2015 blog post here.
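The screenshot that originally illustrated this isn’t reproduced here, but a hypothetical entry in that style would look something like the following (the decision ID and wording are made up for illustration, not taken from an actual VVD document):

```
Decision ID:           SDDC-VI-SDN-001 (hypothetical)
Design Decision:       Deploy a dedicated NSX Manager integrated with the
                       management cluster vCenter Server.
Design Justification:  Separates management-plane networking from tenant
                       workloads and allows independent lifecycle management.
Design Implication:    Requires an additional NSX Manager appliance and the
                       associated licensing on the management cluster.
```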
At the time, despite the announcement of availability, not much content was actually available as design documents, but it has now come a long way.
The current set of VVD documents discusses every design, planning, deployment and operational aspect of the following VMware products & versions, integrated as a single solution stack based on VMware PSO best practices.
It is based on a multi site (2 sites) production solution that customers can replicate in order to build similar private cloud solutions in their environments.
This documentation set fills a great big hole that VMware have had for a long time: while their product documentation covers the design and deployment detail for individual products, no such documentation was available for integrating multiple products. With the VVDs, it now is.
In a way they are similar to CVD documents (Cisco Validated Designs) that have been in use for the likes of FlexPod for VMware…etc.
VVDs generally cover the entire solution in the following 4 stages.
Note that not all the content is fully available yet, but the key design documents (reference architecture docs) are available to download now.
Reference Architecture guide:
- Architecture Overview.
- Detailed Design.
Planning and Preparation guide.
Deployment Guide:
- Deployment guide for Region A (primary site) is now available.
Operation Guide:
- Monitoring and alerting guide.
- Backup and restore guide.
- Operation verification guide.
If you want to find out more about VVDs, I’d have a look at the following links.
Just keep in mind that the current VVD documents are based on a fairly large, no-cost-barred type of design, and for those of you looking at much smaller deployments, you will need to exercise caution and common sense in adopting some of the recommended design decisions so as to stay within the applicable cost constraints. (For example, the current NSX design includes deploying 2 NSX Managers, one integrated with the management cluster vCenter and the other with the compute cluster vCenter, meaning you need NSX licenses on the management cluster too. This may be overkill for most, as typically you’d only deploy a single NSX Manager, integrated with the compute cluster.) Home page: http://www.vmware.com/solutions/software-defined-datacenter/validated-designs.html.
VVD Download page: https://www.vmware.com/support/pubs/vmware-validated-design-pubs.html.
As for the VMworld session itself, the presenter went over all the NSX-related design decisions and explained them, which was a bit of a waste of time for me, as most people would be able to read the document and understand most of those themselves.
As a result I decided to leave the session early, but I have downloaded the VVD documents to read thoroughly at leisure.
Breakout Session 2 – vRA, API, CI, Oh My!
Session ID: DEVOP7674.
Presenters: Kris Thieler – Staff Engineer (VMware).
Ryan Kelly – Staff Engineer (VMware).
As I left the previous session early, I managed to walk in to this session, which had just started next door; Kris and Ryan were talking about DevOps best practices with vRealize Automation and vRealize Code Stream.
They were focusing on how developers using agile development who want to invoke infrastructure services can use these products and invoke their capabilities through code, rather than through the GUI.
One of the key focus areas was the vRA plugin for Jenkins, and if you are a DevOps person or a developer, this session content would be of great value.
If you can gain access to the slides or the session recordings after VMworld (or are planning to attend VMworld 2016 Europe), I’d highly encourage you to watch this session.
Breakout Session 3 – Secure and extend your data center to the cloud using NSX: A perspective for service providers and end users.
Session ID: HBC7830.
Presenters: Thomas Hobika – Director, America’s Service Provider Solutions Engineering & Field Enablement, vCAN, vCloud Provider Software business unit (VMware).
John White – Vice president of product strategy (Expedient).
This session was about using NSX and other products (i.e.
Zerto) to enable push button Disaster Recovery for VMware solutions presented by Thomas, and John was supposed to talk about their involvement in designing this solution.  I didn’t find this session content that relevent to the listed topic to be honest so left failrly early to go to the blogger desks and write up my earlier blog posts from the day which I thought was of better use of my time.
If you would like more information on the content covered within this sesstion, I’d look here.
Breakout Session 4 – Practical NSX Distributed Firewall Policy Creation.
Session ID: SEC7568.
Presenters Ron Fuller – Staff Systems Engineer (VMware).
Joseph Luboimirski – Lead virtualisation administrator (University of Michigan).
This was a fairly useful session focusing on the NSX distributed firewall capability and how to effectively create a zero-trust security policy on the distributed firewall using various tools.
Ron talked about the various options available, including manual modelling based on existing firewall rules, and why that could potentially be inefficient and would not allow customers to benefit from the versatility available through the NSX platform.
He then mentioned other approaches, such as analysing traffic through the use of vRealize Network Insight (the Arkin solution), which uses automated collection of IPFIX & NetFlow information from the virtual Distributed Switches to capture traffic, and how that captured data could potentially be exported and manipulated to form the basis for the new firewall rules.
He also mentioned the use of vRealize Infrastructure Navigator (vIN) to map out process and port utilisation, as well as using the Flow Monitor capability to capture existing communication channels to form the basis of the distributed firewall design.
The session also covered how to use vRealize Log Insight to capture syslogs as well.
All in all, a good session that was worth attending, and I would keep an eye out for it, especially if you are using or thinking about using NSX for advanced security (using the DFW) in your organisation’s network.
vRealize Network Insight really caught my eye, as the additional monitoring and analytics available through this platform, as well as the graphical visualisation of network activity, appear to be truly remarkable (which explains why VMware integrated it into the Cross-Cloud Services SaaS platform, as per this morning’s announcement), and I cannot wait to get my hands on this tool to get into the nitty-gritty.
If you are considering a large or complex deployment of NSX, I would seriously encourage you to explore the additional features and capabilities that this vRNI solution offers, though it’s important to note that it is licensed separately from NSX at present.
Outside of these breakout sessions and the blogging time in between, I managed to walk around the VM Village to see what’s out there, and I was really interested in the Internet of Things area, where VMware were showcasing their IOT-related solutions currently in R&D.
VMware are currently actively developing a heterogeneous IOT platform monitoring solution (internal code name: Project Ice).
The current version of the project is about partnering up with relevant IOT device vendors to develop a common monitoring platform to monitor and manage the various IOT devices being manufactured by various vendors in various areas. If you have a customer looking at IOT projects, there are opportunities available now within Project Ice to sign up with VMware as a beta tester and co-develop and co-test the Ice platform to perform monitoring of these devices.
An example of this is what VMware have been doing with Coca-Cola to monitor various IOT sensors deployed in drinks vending machines, and a demo was available in the booth for all to see. Below is a screenshot of the Project Ice monitoring screen, monitoring the IOT sensors of this vending machine.
The solution relies on an open-source, vendor-neutral SDK called LIOTA (Little IOT Agent) to develop a vendor-neutral agent to monitor each IOT sensor / device and relay the information back to the Ice monitoring platform.
I would keep an eye on this, as the use cases for such a solution are endless and can be applied on many fronts (automobiles, ships, trucks, airplanes, as well as general consumer devices).
One can argue that the IOT sensor vendors themselves should be responsible for developing these monitoring agents and platforms, but most of these device vendors do not have the knowledge or the resources to build such intelligent back-end platforms, which is where VMware can fill that gap through a partnership.
If you are in to IOT solutions, this is definitely one to keep your eyes on for further developments & product releases.
This solution is not publicly available as of yet, though having spoken to the product manager (Avanti Kenjalkar), they are expecting a big announcement within 2 months’ time, which is really exciting.
Some additional details can be found in the links below. IOT @ VMworld 2016 – http://blogs.vmware.com/euc/2016/08/internet-of-things-vmworld-2016.html.
LIOTA – https://www.vmware.com/ciovantage/article/liota-driving-iot-app-development.

VMware VSAN 6.2 Performance & Storage savings

April 20, 2016 4 Comments Just a quick post to share some very interesting performance stats observed on my home lab VSAN cluster (Build details here).
The VSAN datastore is in addition to a few NFS datastores also mounted on the same hosts using an external Synology SAN.
I had to build a number of Test VMs, a combination of Microsoft Windows 2012 R2 Datacenter and 2016 TP4 Datacenter VMs on this cluster and I placed all of them on the VSAN datastore to test the performance.
See below the storage performance stats during the provisioning (cloning from template) time.
Within the red square are the SSD drive performance stats (where the new VMs were being created) vs the Synology NFS mount’s performance stats (where the templates reside) in the yellow box.
Pretty impressive for an all-flash VSAN running on a bunch of whitebox servers with consumer-grade SSD drives (officially unsupported of course, but it works!), especially relative to the performance of the Synology NFS mounts (a RAID 1/0 setup for high performance), right?
Imagine what the performance would have been if this was on enterprise grade hardware in your datacentre.
What also caught my eye was the inline deduplication and compression savings immediately available on the VSAN datastore after the VMs were provisioned.
As you can see, to store 437GB of raw data with FTT=1 (where VSAN keeps redundant copies of each vmdk file), it is only consuming 156GB of actual storage on the VSAN cluster, saving me 281GB of precious SSD storage capacity.
Note that this is WITHOUT the erasure coding (RAID 5 or RAID 6) that’s also available with VSAN 6.2, which, had it been enabled, would have reduced the actual consumed space even further.
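Just to sanity-check those numbers, here’s a quick back-of-the-envelope helper using the figures above (the GB values are taken from the stats shown; the snippet itself is only illustrative):

```shell
# Back-of-the-envelope check of the dedupe/compression savings quoted above.
raw_gb=437    # logical data written to the VSAN datastore (incl. FTT=1 copies)
used_gb=156   # physical capacity actually consumed
saved_gb=$((raw_gb - used_gb))
ratio=$(awk "BEGIN { printf \"%.2f\", $raw_gb / $used_gb }")
echo "Saved ${saved_gb}GB of SSD capacity (effective ratio ${ratio}x)"
# → Saved 281GB of SSD capacity (effective ratio 2.80x)
```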
The point of all this is that the performance and the storage savings available with VSAN, especially all-flash VSAN, are epic, and I’ve seen this in my own environment. In an enterprise datacenter, all-flash VSAN can drastically improve your storage performance while at the same time significantly cutting your infrastructure costs across all of your vSphere storage environments.
I personally know a number of clients who have achieved such savings in their production environments, and every day there seems to be more and more demand from customers for VSAN as their preferred storage / hyper-converged technology of choice for all their vSphere use cases.
I would strongly encourage you to have a look at this wonderful technology and realise these technical and business benefits (summary available here) for yourself.
Share your thoughts via the comments below, or feel free to reach out to discuss via email or social media. Thanks.

New VMware Product Availabilities – Now available to download.
March 16, 2016 Leave a comment VMware have just made a number of new product versions available (mostly maintenance releases of a few different products, including the much-hyped VSAN 6.2), so here's a quick post to summarise what was released last night (15.03.2016). VMware VSAN 6.2 – VMware VSAN 6.2 was officially announced in early February with a number of cool new features such as erasure coding, but unless you were a techie trying to download the software, you may not have known that it was not available for download despite being announced.
That was until yesterday and the product is now available to download for every customer.
Install binaries here.
You need vCenter Server 6.0 U2 and ESXi 6.0 U2, both of which were also made available to customers yesterday.
VMware vRealize Automation 7.0.1 is now released and available for download. Release notes here.
Product binaries here.
Documentation here.
VMware vRealize Orchestrator 7.0.1 is released and available to download. Release notes here.
Product binaries here.
Documentation here.
vRealize Business for Cloud (the old ITBMS offering) is also released and available now. Release notes here.
Product binaries here.
Documentation here.
vRealize Log Insight 3.3.1 is released and available to download. Release notes here.
Product binaries here.
Documentation here.
vCloud Suite 7.0 is also released and available to download (here) – this includes all of the above new product versions plus the existing versions of vSphere Replication 6.1, vSphere Data Protection 6.1.2, vROPS 6.2.0a and vRealize Infrastructure Navigator 5.8.5.

VMware All Flash VSAN Implementation (Home Lab)

March 15, 2016 2 Comments I’ve been waiting a while to be able to implement an all-flash VSAN in my lab, and now that VSAN 6.2 has been announced, I thought it was time to upgrade my capacity disks from HDDs to SSDs and get cracking! (Note: despite the announcement, VSAN 6.2 binaries are NOT YET available to download – I’m hearing they will be available in a week or two on My VMware, so until then mine is based on VSAN 6.1 / ESXi 6.0 U1 binaries.) As I already had a normal (hybrid) VSAN implementation using SSD+HDD in my management vSphere cluster, the plan was to keep the existing SSDs as the caching tier and replace the current HDDs with high-capacity SSD drives.
So I bought 3 new Samsung 850 EVO 256GB drives from Amazon (here).

All Flash VSAN Setup.
Given below are the typical steps involved in the process of implementing all-flash VSAN within a VMware cluster (I’m using the 3-node management cluster within my lab for the illustration below).
Install the SSD drives in the server – This should be easy enough.
If you are doing this in a production environment, you need to ensure that the capacity SSDs (like all other components in your VSAN ready nodes) are on the VMware HCL.
Enable VSAN on the cluster – Need to be done on the web client.
Verify the new SSDs are available & recognised within the web client – All SSDs are recognised as caching disks by default.
Manually tag the required SSD drives as capacity disks VIA THE COMMAND LINE for them to be recognised as capacity disks within the VSAN configuration – This step MUST be carried out using one of the ways explained below; until then, the SSD disks WILL NOT be available to be used as capacity disks within an all-flash VSAN.
(There is currently no GUI option on the web client to achieve this, so the CLI must be used.) Option 1 – Use the esxcli command on each ESXi server: SSH in to the ESXi server shell.
Use the vdq -q command to get the T10 SCSI name for the capacity SSD drive (also verify the “IsCapacityFlash” option is set to 0).
Use the “esxcli vsan storage tag add -d <T10 disk name> -t capacityFlash” command to mark the disk as a capacity SSD.
Use the vdq -q command again to query the disk status and ensure the disk is now marked “1” for “IsCapacityFlash”.
If you now look at the Web client UI, the capacity SSD disk will now have been correctly identified as capacity (note the drive type changed to HDD which is somewhat misleading as the drive type is still SSD).
Option 2 – Use the “VMware Virtual SAN All-Flash Configuration Utility” software – This is a 3rd-party tool and not an officially supported VMware tool, but if you do not want to manually SSH in to the ESXi servers one by one, it can be quite handy, as you can bulk-tag disks on many ESXi servers at the same time.
I’ve used this tool to tag the SSD’s in the next 2 servers of my lab in the illustration below.
Verify capacity SSD across all hosts – Now that all the capacity SSD’s have been tagged as capacity disks, verify that the web client sees all capacity SSD’s across all hosts.
Create the disk groups on each host – I’m opting to create this manually as shown below.
Verify the VSAN datastore now being available and accessible.
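For convenience, the per-host CLI steps above can be consolidated as follows. The device name is a placeholder (take the real T10 name from your own vdq -q output), and the esxcli/vdq stubs are included only so the sequence can be read and dry-run off-host; remove them when running in an actual ESXi shell.

```shell
# Dry-run sketch of the capacity-tagging steps above. On a real ESXi host,
# delete the two stub functions and set DISK to your SSD's actual T10 name.
esxcli() { echo "esxcli $*"; }   # stub: prints instead of executing
vdq()    { echo "vdq $*"; }      # stub: prints instead of executing

DISK="naa.500a07510f86d6b3"      # placeholder device id taken from `vdq -q`

vdq -q                                                   # check IsCapacityFlash is 0
esxcli vsan storage tag add -d "$DISK" -t capacityFlash  # tag as capacity disk
vdq -q                                                   # verify IsCapacityFlash is 1
```

Repeat per host (or use the bulk-tagging utility mentioned above), then confirm in the web client that the disks show as capacity.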
There you have it.
Implementing all-flash VSAN requires manually tagging the SSDs as capacity SSDs for the time being, and this is how you do it.
I should also add that since moving to all-flash VSAN, my storage performance has gone through the roof in my home lab, which is great too.
However, this is all on whitebox hardware, and not all of it is on the VMware HCL…etc., which makes those performance figures far from optimal.
It would be really good to see performance statistics if you have deployed all flash VSAN in your production environment.


My Reading List (2019).
Jan 27, 2020 Uncategorized No comments yet After college, I resolved to read one book a month.
It can be fiction, non-fiction, technical, business-oriented, or whatever, as the goal was to always be absorbing and digesting new ideas and information, even just for fun.
In more recent years, I’ve generally tried to read 3 per month which works great with a Kindle app and […] Read More>> O Come, All Ye Startups.
Dec 19, 2019 Uncategorized No comments yet A long time ago, I wrote tech parodies of Christmas songs and poems.
While it started with The Night Before Demo , it continued with The Night before Christmas, and Working in a Startup Wonderland.
This year, I bring you a darker take on venture capital.
If you’re not familiar with the tune, start this video […] Read More>> How Unprofitable Companies IPO.
Jun 05, 2019 Startup Economics No comments yet Disclaimer: In this post, I describe business models of tech companies where I hold positions.
Don’t use this to draw conclusions about any specific company or rationalize any particular investment.
Follow your own investment strategy, not some knucklehead with a blog.
In the last few years, we’ve seen a variety of major tech companies go […] Read More>> Let’s be honest.
Startups suck.
May 31, 2019 Startup Economics 3 comments Every day, you put yourself and your ideas out there and get shot down.
If you’re bad at it, you get shot down again and again until you finally give up.
If you’re good at it, you get shot down again and again until you get to something that works.
Fundamentally, they’re not that different.
[…] Read More>> The Draw of Marvel Movies or Why DC Movies Suck.
Mar 28, 2019 Uncategorized 2 comments I’ve enjoyed comic books since I was a little kid.
I grew up on Super Friends, and Spider-Man and His Amazing Friends, and Adam West’s Batman.
A few years later, I moved to the big screen adaptations of Batman, Spider-Man, and numerous others.
The stories of epic adventure, the triumph of good over evil, and […] Read More>> That Conference Panel on Entrepreneurship.
Feb 05, 2019 Advice No comments yet Late last summer, I participated in a panel discussion at my favorite conference of the year: That Conference.
Wait, which conference?
That Conference.
Now that we have that out of the way, let’s dig in.
You can catch the full video here but first some background.
If you listen closely, you’ll realize that we have […] Read More>> My Reading List (2018).
Jan 07, 2019 Uncategorized No comments yet After college, I resolved to read one book a month.
It can be fiction, non-fiction, technical, business-oriented, or whatever as the goal was to always be absorbing and digesting new ideas and information, even just for fun.
More recently, I’ve generally tried to read 3 per month which works great with a Kindle and a ton of […] Read More>> On Job Titles at Startups.
Dec 18, 2018 Advice No comments yet I’ve worked for organizations of every size, from being employee #1 to starting at #25 to a massive US federal department.
Further, I’d advised companies starting from a single founder to a couple hundred employees.
From being on every side of that, one of the things I’m most sensitive to is job title but probably […] Read More>> OAuth 2.0 Scopes: A Thought Experiment.
May 30, 2018 Developer Experience, Proposals 2 comments OAuth 2.0 (RFC 6749) is a great authorization framework but it leaves much up to the imagination.
Luckily, there are numerous extensions that expand, explain, and clarify the basic capabilities to build a robust and powerful suite of standards. That said, there’s one unobviously complex area which gets little attention: scopes.
What is an OAuth Scope?
[…] Read More>> Building Version 1 is Dangerous.
Apr 23, 2018 Advice No comments yet Building software is hard.
You have all the fun of being excruciatingly specific in certain situations, incredibly general the rest of the time, and expressing all of it in a language that isn’t your own.
In the context of a new product – whether it’s for a startup or an established company – it’s even […] Read More>>


TimeSplitters Retro Roots.
Like many of you I’ve been trapped inside due to the coronavirus lockdown for over a month now.
I feel it’s time to take a long needed break from development of and get all nostalgic about a PlayStation 2 classic… The original TimeSplitters by Free Radical and Eidos.

Check out ‘Timesplitters 1 PS2 – Longplay 100%’ by Loopy Longplays.

MY FIRST PS2 GAMES

During my second year studying Biology at university I blew a portion of my student loan and summer work money on the recently launched PS2.
I remember it arriving and I eagerly jumped on the bus to the nearest Game store to pick up a game.
There really wasn’t much out at the time and the 3 games which I bought were new franchises and could easily have been terrible.

The games were Shadow of Memories, The Summoner, and Time Splitters.
All three were brilliant games but Time Splitters was the game I returned to time and time again.
A duck – one of the crazy characters from TimeSplitters.
I never owned a Nintendo 64 and played GoldenEye at a friend’s house.
The game was absolutely amazing both in terms of the challenging single player and the frantic 4 player death matches.
No shooter came close to this game until TimeSplitters.
TimeSplitters offered that fast paced challenging gameplay and fun filled 4-player death matches (if you had a multi-tap) that I had been longing for since resigning as Bond.
The game forced you to master the controls and map the levels in your head.
It was unforgiving, and in order to unlock all the characters you had to play, die, learn, and rinse and repeat.

Completing some of the ‘Story Mode’ levels on hard was ridiculous

It required learning the position of every enemy and knowing the levels inside out.

The enemy AI was relentless and really forced you to get good fast or die

I never fully understood the game’s story but it was something to do with objects in various times and locations that the ‘evil’ time splitters (strange zombie like monsters) wanted.
You would run through a level, shooting anything that moved searching for the object.
Once you found the object you had to get it to the exit but that was made harder by loads of time splitters warping in around you.
Each Story Mode level that you completed within the time limit unlocked a new character or cheat usable in the multiplayer.
Behead the Undead challenge in TimeSplitters on PS2.
Once you’d completed ‘Story Mode’, ‘Challenge Mode’ unlocked.
This mode was a set of crazy challenges such as throwing a certain number of bricks through windows in a time limit.
It was great fun and one of my favourite challenges was ‘Behead the Undead’ which is pretty self explanatory.
The ‘Arcade Mode’ was where the multiplayer action was at.
I’d often borrow a friend’s multi-tap and play 4-player death matches with my uni buddies.

It brought back memories of 4-player GoldenEye from an even more retro era.


As a game developer I learnt a lot from TimeSplitters

TimeSplitters creates fun challenging gameplay taking the focus off the story

This is something I did with Space Blaster

Although Space Blaster is a lot smaller than TimeSplitters I still added loads of challenges.
The challenges force the player to learn the game in the ‘play, die, learn cycle’.

Some might say that other games in the TimeSplitters series are better games.

I would probably agree in some ways.

But TimeSplitters is where it all started

It kept it simple.
It focused on gameplay and clever challenges.
That is what kept me coming back for ‘just one more go’, and that is what I absolutely loved about it.
Well, I’m off to continue developing The Flawless: Art’s Tale.

To keep up to date with everything BKD join the Discord or follow us on ,

#RetroRoots Ste Wilson is a director, game developer, and programmer at Bare Knuckle Development Ltd.

When not coding away on BKD games, he can be found playing video games on console and PC.

He also makes music and loves playing guitar, writing tunes, and producing music.
Copyright © 2020 Bare Knuckle Development Ltd.



September 4, 2020.
I’ve worked on a number of games which you will find in the games menu above.
At present, I’m mostly maintaining PZL, casually developing with Ogre Game Kit and trying various game engines.
Games I’ve worked on:
Cassini Division: space combat.
Global Warfare: Half-Life modification.
My First Planet: ???
PZL: iOS puzzle game.
Speed Games: several small games in various states.
Treebles: iOS platformer/puzzler.
© Alex Peterson.


Why social news sites like N4G and reddit are failures
I’m an artist just like anyone else. So why am I persecuted just because my paintbrush is Final Cut Pro and After Effects, and my canvas is YouTube?
In the past I’ve written on my YouTube marketing blog about mod abuse at gaming sites that claim to be “social news” (those that allow users to submit info).

But now I’m adding a video detailing my recent clash with two N4G mods.

Before watching this video, I just want to say this: I have an insanely strong sense of justice, which causes me to clash with those who don’t, especially when it is about something that directly impacts my daily life.
Because I’m a professional online video creator who doesn’t have a huge network promoting his channel, being able to share my YouTube videos into communities is of critical importance.

It’s ridiculously difficult to build an audience on YouTube using third-party websites like reddit and N4G, and I’ve just reached the end of my rope dealing with people who honestly aren’t any more qualified to be moderators than I am.
Anyway, I produced a news video as part of my channel’s RPG Report segment and then went to submit it to N4G.
Seems simple enough, right?
To express just how absurd the whole situation is, I made this video using screen capture software to guide you through how far down the rabbit hole this goes.
And here are some screenshots of additional conversations taking place in the hours after I created this video. The worst part of this? Selective enforcement of the made-up rules.

This girl posted her YouTube video at the same time I did, and her submission wasn’t auto-failed by any of the mods.

They are so adamantly opposed to my submission on the basis that it’s a YouTube video, but it’s perfectly okay for someone else to do it.
And you CAN’T suggest her video slipped under their radar.
I let the mod know about her video and he said it was perfectly okay.
Yeah, that’s right: Lt. Skittles once again auto-failed my post submission before anyone could vote on it.
It was at this time I looked directly below my rejected post and saw this: rows of podcast and video submissions.

Of course, because my post was submitted six hours ago, it is something like 50 pages back.

At this point any hope I have of the N4G community seeing my video is gone. It’s buried under a mountain of stuff that didn’t get any votes in the past six hours.
It’s hard to be respectful when a simple act of submitting a link to a website takes six hours and a lot of back and forth just to get someone to watch the video.
Or to get told you aren’t considered an “industry professional” despite the years of work you’ve put into the industry.
And this mod makes the N4G community come across to me like some kind of cult, as if it’s the norm for mods to beat you down into submission and convert you to their way of thinking.
And this is where the story ends, because I’ve wasted enough time on N4G.

Their mods don’t respect the users and invent esoteric rules that aren’t mentioned in the website’s rules section but that everyone is expected to psychically know.
His statements demonstrate just how much they only care about their own clique.
What I believe is actually going on:
I wish I knew.
I’d like to think there is some amount of favoritism, but the truth is I don’t have the time or energy to analyze the failed submissions versus the approved ones to find where all the dots connect.
What I do know is no social news website should be operating this way.
Why does any of this matter?

In order to get your YouTube videos discovered you need to build lots of awareness. Because the ad revenue generated per monetized video view doesn’t earn enough money to justify traditional forms of marketing (like pay-per-click ads), you need to share your video on websites like reddit and N4G that accept submissions.
This is just a fact of life.
But when the mods block people from doing so for reasons that seem designed purely to discourage you, it makes it nearly impossible for creators to get discovered by audiences.
And you can’t say video creators are any less entitled to show their work than people who submit re-hashed memes, knitted dolls of game characters or hand-forged swords.
It takes a huge amount of time and creative ability to produce high-quality videos.
I’m an artist just like anyone else, so why am I persecuted just because my paintbrush is Final Cut Pro and After Effects, and my canvas is YouTube?
You also can’t expect videos on YouTube to be magically discovered; anyone who has built a large audience has leveraged sites like N4G and reddit to find viewers.
The problem is, in recent years the volunteer mods of these sites have clamped down on it, and basically fight anyone who exists outside the established cliques they belong to.

This is exactly why I made the Kickstarter for my own gaming social news site, rpgfanatic.net. The system is designed, using gamification, to not require any moderators (certainly not volunteer moderators).
I can’t control how others run their websites, but I can choose to make a better alternative.
I hope the project gets funded so I can finally get the site into the state it has needed to be in for the past two years.
It’s been sitting half-finished for much too long.

Martell TV is my focus right now, but I really believe a site like rpgfanatic.net is desperately needed in the gaming community, and the success of the site could inspire other websites to change their ways.
(Hey, a guy can dream, can’t he?)





#5gkb…of Solitaire Statements and Draw-cup Adjustments (Tom Russell)


BSoMT is extremely proud to present an incredible insight into the working mind of Hollandspiele and its philosophy towards the design of strategic games for one.


I was talking on the phone with a game designer.
Like every other conversation I’ve had with that designer, it was wide-ranging and free-wheeling, covering a number of gaming-related topics.
I happened to mention something about solitaire wargames and how well they sold for the industry in general, and for us in particular.
After having seen over a dozen of my wargame designs hit the market, the first one to really make a big splash was Agricola, Master of Britain, and at that time it was the best-selling game that I had ever designed.
(Image courtesy of Katie’s Game Corner.)
“I’m really sorry to hear that,” said the designer.
“Uh, what?” “I’m not sorry it did well, of course; I’m glad for you and Mary,” he said. “It’s just sad that solitaire games are so prevalent.
The whole point of board games is to share the experience with another person.
If I wanted to play a game by myself, I’d just play a video game.” The funny thing is, prior to converting my Agricola from an unsuccessful two-player game into a working solitaire one, my own view of solitaire-only board games probably wasn’t much different than that designer’s.
I had no problems playing on, if you’ll pardon the phrase, both sides of my table, and thus playing a 2P game solo.
I didn’t even have a problem playing that other Agricola solitaire when I felt like farming and Mary didn’t.
But I was always a little wary of dedicated solo-only wargames.
Like that designer friend of mine, if I wanted to play a game designed for one player, I’d just play a video game, but it was less because of any philosophical underpinnings about “the whole point of board games” being the shared nature of the experience, and more because I felt that a video game would offer me a richer and more compelling experience that rewarded (or punished) my strategies and prized my agency.
In short, a strategy computer game made me feel like my decisions mattered.
I was not convinced that this would be true in a solitaire-only wargame.
It didn’t help that none of the solo wargames I heard about seemed to have much strategy at all.
B-17, Queen of the Skies seemed to be all about passively experiencing an emergent story over which you had little control.
Ambush seemed to be a slightly more complicated version of Choose Your Own Adventure, with limited opportunities for true interaction.

The popular State of Siege series appeared to be essentially and ironically stateless, with the advancement of enemy units being dependent on a card draw, and your ability to beat them back dependent on good die rolls – a game more of luck than skill.
Now, before I go any further, let me say that these opinions were knee-jerk reactions (perhaps with the emphasis on the “jerk”) of someone on the outside, looking in and askance.
They are not necessarily “factually accurate”, and often there is much more to a game than first appears.
A good example is the State of Siege title Cruel Necessity, which helped change my mind about solo-only games.
The game has levers that the player can pull to give himself decisive advantages.
Now, I’m rubbish at pulling those levers, and I often felt like I was still at the mercy of the next card, and that the die provided by the publisher was either defective or possessed by a vengeful and vindictive spirit as it always came up precisely one pip short of what I needed.
It was still more a game of luck than skill, but there was more to it than I had initially thought, and there was also more potential for solitaire wargames to have meaningful decisions.
This general impression was aided by exposure to some solitaire designs by the intrepid team of Hermann Luttmann and Fred Manzo, and around this time I had decided to take another look at my long-gestating Agricola, Master of Britain.
I’m not going to bore you with all the details, primarily because I’ve already written about the creation of that particular game (and its spiritual sequel, Charlemagne, Master of Europe) but I will say that in general, my primary goal in designing these solitaire games was to put a greater emphasis on player agency.
Like my favorite abstract, backgammon, there is luck involved, sometimes a lot of luck, but if your victory or defeat hinges upon a die roll going well, then you’re doing it wrong.
They are games of strategy, planning, and risk mitigation.
The core of the two games of course is the cup adjustment mechanism, which Mary tells me I need to come up with a better name for, as “cup adjustment” sounds somewhat risque.
Enemy units that aren’t on the map exist in a Friendly, Unfriendly, or Hostile cup, reflecting their general attitude toward what the player is doing.
When you take an action, you blindly move chits from one cup to another: actions that people like make them friendlier; actions they don’t like make them less so.
On a systemic level, your actions take on a sort of equilibrium, driving the game state, and directly impacting the feel of the late game.
This has the consequence of the game becoming easier on a tactical level when you’re doing well strategically, and harder when you’re doing poorly.
Long-term building projects you place on the board will also determine which cup eliminated enemy units go into at the end of the turn, which means that certain regions will gradually get quieter and easier to rule once you’ve put in the time and effort to pacify them.
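The cup mechanism described above can be sketched in a few lines of Python. To be clear, this is only my own illustration of the idea: the three cup names come from the article, but the chit counts, class, and method names are invented and are not taken from the actual rules of Agricola, Master of Britain.

```python
import random

# Cups ordered from most to least sympathetic to the player.
CUPS = ["friendly", "unfriendly", "hostile"]

class CupPool:
    """Illustrative sketch of a draw-cup adjustment mechanism."""

    def __init__(self, chits_per_cup=10):
        # Each cup holds anonymous enemy chits (hypothetical starting count).
        self.cups = {name: ["chit"] * chits_per_cup for name in CUPS}

    def shift(self, count, toward_friendly):
        """Blindly move `count` chits one cup toward friendly or hostile."""
        order = CUPS if toward_friendly else list(reversed(CUPS))
        for _ in range(count):
            # Random choice models the "blind" move: you don't pick which
            # chit (or even which cup) ends up adjusted.
            sources = [i for i in range(1, len(order)) if self.cups[order[i]]]
            if not sources:
                break  # nothing left to move in that direction
            i = random.choice(sources)
            self.cups[order[i]].pop()
            self.cups[order[i - 1]].append("chit")

    def total(self):
        return sum(len(c) for c in self.cups.values())
```

Note the one-way drift this produces: shifting toward friendly can only fill the friendly cup and drain the hostile one, which is the equilibrium effect described above, where doing well strategically makes the tactical game gentler, and doing poorly makes it harsher.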
The key in all this, and the intention behind it, is that your decisions have ramifications both immediate and long-term, and that the game state evolves in reaction to your playstyle.
I would contend in fact that it does this in a way that’s roughly analogous to the way in which another player changes what she’s doing in reaction to your playstyle.
I say “roughly analogous” because there is no other player.
A game system can only be a poor facsimile of human intelligence, so I think solo-only games are better at representing a diverse host of disorganized, disunited socio-political entities pursuing their own inscrutable goals.
An important part of the player having agency and making meaningful decisions is that she is the attacker, and tasked with achieving something of consequence.
Many solitaire games tend to put the player in the role of the defender, striving only to beat back the inexorable horde outside the gates.
So much so that years ago, when I was still skeptical of the appeal of solo-only games, a publisher told me that doing a solo game was easy – take a situation where one side is on the ropes and hopelessly overwhelmed: that’s the player, the system is the attacker, and it almost always wins.
Even as I’ve come around on solitaire wargames, and even as we’re going to publish a few of them, in general I’m still very wary of “overwhelmed defender struggling to stay afloat” State of Siege style games.
Every once in a while, someone sends us a solitaire game along those lines that captivates us by doing something new and exciting with the formula.
Brad Smith’s NATO Air Commander gives players a lot of options and flexibility in how they approach the air-based defense of Europe in this Cold-War-goes-hot game set in the eighties.
The player must choose between short-term tactical objectives and long-term strategic missions that can tip the odds into the player’s favor.
Robert DeLeskie’s Wars of Marcus Aurelius uses CDG-style mechanisms, focusing on card angst and resource management as you fend off the literal barbarians at the gates.
In both of these games, despite the shades of State of Siege, players are encouraged to take an active role, their decisions matter, and the game state is mutable.
As a publisher, we receive a lot of solitaire submissions, and the vast majority of them fall into the “desperate defender” camp, and, I’m sorry to report, very few of them sustain any strategy beyond “roll high”.
The game is stateless, your decisions negligible.
It got to the point that we actually changed our submission guidelines to say: Please. For the love of God. Stop sending us State of Siege games.

That’s not to say there isn’t a market for these “player is merely along for the ride” sort of games.
Clearly there is.
Such games can be thematic, and, at least for an hour or so, pleasantly diverting.
But solitaire games are capable of more than that.
I want more than that, and I think many gamers do as well.

If you would like to help support the BSoMT website, please feel free to buy me a coffee, or pop over to Hollandspiele on Twitter (@hollandspiele). Hollandspiele website: https://hollandspiele.com/

One thought on “#5gkb…of Solitaire Statements and Draw-cup Adjustments (Tom Russell) ”

Liz (Beyond Solitaire), 01/02/2018 at 14:41: I got a shipping notice for Agricola: Master of Britain this morning.
Very much looking forward to giving it a try.
Thanks for the insightful post.