Have you ever needed to access servers in your virtual private cloud from outside your “safe zones”? Moreover, is your team willing to log in to those machines from almost anywhere, at the least appropriate times? And does all of this require Volvo-class security? Well, we've got you covered.
Since this article focuses on AWS, our challenge comprises two major tasks:
Enabling access to landing-page content via an S3 VPC endpoint, in addition to IP whitelisting.
Making sure no one routes general internet traffic through the cloud network, yet without losing connectivity.
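The first challenge can be sketched with an S3 bucket policy (the bucket name, VPC endpoint ID, and IP range below are hypothetical placeholders, not values from our setup). IAM treats multiple Allow statements as a union, so a request is allowed if it arrives either through the VPC endpoint or from a whitelisted address:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowFromVpcEndpoint",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-landing-pages/*",
      "Condition": { "StringEquals": { "aws:sourceVpce": "vpce-0123456789abcdef0" } }
    },
    {
      "Sid": "AllowFromWhitelistedIps",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-landing-pages/*",
      "Condition": { "IpAddress": { "aws:SourceIp": "203.0.113.0/24" } }
    }
  ]
}
```

Traffic that comes in through the VPC endpoint never leaves the AWS network, which is exactly what keeps it fast and free.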
Before telling more, let's assume a basic scenario: you've launched a t2.nano instance, started an OpenVPN server (with Docker or not), and created a security rule that allows your OpenVPN instance's Elastic IP address to reach any machine in the VPC as well as your S3 buckets. Sounds great, but it's far from ideal. The thing is, AWS doesn't charge a penny for network traffic inside your VPC, but with this setup, going from one EC2 instance to another means reaching it over the internet. That brings both cost and speed issues. The same applies to your buckets, of course. Not to mention that you are also providing a kind of open-source ZenMate to all your users, because whatever they download or upload goes through your network and your very own encrypted connection. And that is something no system administrator wants to be responsible for.
So what is the exact solution? Obviously it isn't blocking outgoing traffic to 0.0.0.0/0, and I can assure you, we have tried countless approaches, from config files to kernel parameters. Now meet the most elegant way: OpenVPN-AS.
Tired of having your changing dependencies slow you down? Gradle 3.1's composite builds give you a clean process for changing and refreshing your dependencies.
Our journey with Gradle started one and a half years ago with this presentation. Everybody in the company just loved its expressive and easy-to-understand structure and realized that a plugin model was the wrong level of abstraction; language-based approaches were the right one in terms of their long-term flexibility.
So it didn't take long for the almost 50 members of the development team to move the whole build infrastructure to Gradle. Gradle doesn't just throw away the foundation that other build tools laid. Instead, it builds on top of them, easily and more powerfully, while remaining 100% compatible with them. That made Gradle not an alternative but an upgrade for us.
While we were migrating projects, simple pom.xml projects required only a few steps, such as running “gradle init,” but others needed more effort. Everything was good with Gradle except for one thing: “composite builds.”
And finally, our most-requested feature came out with Gradle 3.1, and now we love Gradle more than ever.
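As a minimal sketch of what composite builds unlock (the project names and path here are hypothetical, not from our codebase), a composite build is declared in the settings script with `includeBuild`; Gradle then substitutes the included build's outputs for the matching binary dependencies:

```groovy
// settings.gradle of the consuming application (hypothetical layout)
rootProject.name = 'my-app'

// Use the locally checked-out library instead of its published binary.
// Dependencies on the library's group:name are resolved from ../my-utils,
// so changes there are picked up without publishing a new version.
includeBuild '../my-utils'
```

With this in place, building my-app rebuilds my-utils on demand, so you can edit a dependency and see the change immediately.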
This article is more like a proof of concept of a microservice implementation, based on a real-world project built on top of an integration of two stock markets, Nasdaq and BIST. The project requires different types of approaches and know-how, which can be grouped into two categories: streaming-data API implementation and data analysis. Both sides have their own challenges, as they have different requirements. Read more →
After trying several different approaches, we came up with what we think is the most elegant way of integrating Docker into our build tool Gradle.
Docker and Gradle have been around for a while, and there are many tutorials, blog posts, and the like about best practices. What follows is our take: a simple and elegant integration of the two technologies.
First, let's start with why we chose Gradle's Application plugin and how to use it, without getting into Docker yet. The Application plugin works hand in hand with the Groovy, Scala, and Java plugins to create an executable JVM application. Applying the Application plugin also implies applying the Distribution plugin. So, when it comes to deciding which plugin to use for making executables, the Application plugin is the most official way of doing things.
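As a sketch (the main class name is a made-up placeholder), applying the plugin and pointing it at an entry point is all that's needed:

```groovy
// build.gradle (hypothetical project)
apply plugin: 'java'
apply plugin: 'application'   // implicitly applies the Distribution plugin too

// Entry point of the executable JVM application
mainClassName = 'com.example.Main'
```

Running `gradle installDist` then lays the application out under build/install/<project-name>/ with generated start scripts, which later becomes a convenient directory to copy into a Docker image.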
Before containers were integrated with the applications, the RoR application's deployments were managed either manually or with a handier tool such as Capistrano. Either way, a couple of required procedures need to be applied to every new changeset of the source code that is to be deployed as a version.
From an administration point of view, RoR applications are treated as file-based applications, similar to PHP. Unlike Java or Go, there is no single binary or archived deployable artifact. Therefore, every changeset contains several files and directories, which makes deployment (directly or indirectly) a process between SCM and the destination servers. Capistrano handles this quite well; in particular, in case of any deployment error, its automated rollback capability tries to keep all the target nodes in the cluster on the same version. Read more →
Type inference is a great programming-language feature that helps coders write clean, readable code in a reasonable amount of time. Learn more here.
Writing type-safe code while maintaining less boilerplate is an important aspect of programming languages in terms of developer productivity. Because type-safe code is less error-prone and less boilerplate leads to more readable code, the two together mean reduced development time. Type inference is a great programming-language feature that maintains this balance.
Developers usually read code more often than they write it. Thus, even if the source code will end up being processed by a computer, most of the time our focus is on putting it into a more human-readable form. At some point, we pay attention to UX principles, like:
Humans have a limited attention span, so source code should help to spend this attention wisely. Information comes at a cost, so the longer the code is, the more overwhelming it is to read.
Sometimes, less means more. Short code may look brilliant, but it amplifies the time that others must spend on it. We should provide just enough information. Implicit values, if we do not abuse them, are the fix for this.
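To make this concrete (a minimal Java sketch; the names are made up for illustration), local-variable type inference via `var` keeps declarations short without giving up static typing: the compiler still knows every type, the reader just isn't forced to repeat it.

```java
import java.util.List;
import java.util.stream.Collectors;

public class InferenceDemo {
    // Return type inferred by the reader from the two branches: String.
    static String describe(int n) {
        return n % 2 == 0 ? "even" : "odd";
    }

    public static void main(String[] args) {
        var count = 42;                      // inferred as int
        var names = List.of("ada", "alan");  // inferred as List<String>
        // The method reference's argument type is inferred from the stream's element type.
        var lengths = names.stream().map(String::length).collect(Collectors.toList());
        System.out.println(describe(count)); // prints "even"
        System.out.println(lengths);         // prints [3, 4]
    }
}
```

Each declaration carries just enough information: the right-hand side already says what the value is, so restating the type would be the kind of costly redundancy described above.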
Vert.x can lend a hand in helping your microservices find each other. See how to get it set up and what it can do for your software.
Remember the Unix philosophy, “Do one thing and do it well”? That is the philosophy of microservices, too. In software development, it is common practice that when the same functionality is seen in different parts of an application, it is abstracted away as another component.
This article shows you how to launch Vert.x, the toolkit for creating reactive apps on the JVM, in a dynamic way.
Vert.x started back in 2011, and it was one of the first projects to push the reactive model for modern applications that need to handle a lot of concurrency. Since then, people have developed best practices, from writing good-quality code using Rxified Vert.x, RxJava's Observable, and JoinObservable, to deployment using Docker, Kubernetes, or Swarm. Vert.x does not force developers to obey particular rules and standards; therefore, it is a better fit for today's Agile environments and Lean Enterprises. Thus, developers like us, who are keen on freedom, can try new ways of doing things. With that in mind, we did not want to launch our microservices in statically defined ways. So in this article, I want to introduce how we launch Vert.x in a dynamic way, and in the coming days we want to publish a series of articles about how we use brand-new methods related to things like service discovery and deployment.
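The “dynamic” idea, deploying components chosen by name at runtime rather than wired in statically, can be sketched with plain Java reflection. This is an illustration of the concept only, not Vert.x's actual API, and all class names are hypothetical; in real Vert.x code the equivalent move is passing a verticle class name string to `vertx.deployVerticle(...)`.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class DynamicLauncher {
    /** Minimal stand-in for a deployable unit (Vert.x calls these verticles). */
    public interface Deployable {
        void start();
    }

    /** An example component; in a real system the name would come from configuration. */
    public static class GreetingService implements Deployable {
        public volatile boolean started = false;
        @Override public void start() { started = true; }
    }

    private final ConcurrentMap<String, Deployable> deployed = new ConcurrentHashMap<>();

    /** Instantiate and start a component chosen at runtime by its class name. */
    public Deployable deploy(String className) throws Exception {
        Deployable d = (Deployable) Class.forName(className)
                .getDeclaredConstructor()
                .newInstance();
        d.start();
        deployed.put(className, d);
        return d;
    }
}
```

Because the launcher only ever sees a string, which components get deployed can be decided by configuration at startup instead of being fixed at compile time.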
See how Vert.x, its Service Discovery component, and its event bus work to get services talking to one another on both single and multiple JVMs.
In my previous article, I explained Service Discovery in Vert.x and introduced an example of transparent remoting using it. Transparent remoting is a remote method invocation that looks like a local method invocation. In other words, we have a plain Java interface and its proxy implementation on the client side, while the stub, where the actual implementation runs, sits on the server side. With Service Discovery in Vert.x, we can obtain service references by service name, so we no longer need to care whether a service runs locally or remotely. This location transparency in Vert.x is a very important topic, and I am going to explain it in detail in this article.
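The mechanics can be made concrete with a self-contained sketch using JDK dynamic proxies. The registry and the service names here are hypothetical stand-ins, not Vert.x's generated service proxies: the client programs against a plain interface, while the proxy forwards each call to whatever implementation is registered under that name, local or remote.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TransparentRemoting {
    /** The plain interface the client programs against. */
    public interface GreetingService {
        String greet(String name);
    }

    /** In-memory stand-in for a service registry; Vert.x would back this with its cluster. */
    private static final Map<String, Object> REGISTRY = new ConcurrentHashMap<>();

    public static void register(String serviceName, Object impl) {
        REGISTRY.put(serviceName, impl);
    }

    /** Look up a service by name and return a proxy that forwards every call to it. */
    @SuppressWarnings("unchecked")
    public static <T> T lookup(String serviceName, Class<T> type) {
        InvocationHandler handler = (proxy, method, args) -> {
            Object target = REGISTRY.get(serviceName); // could be local or a remote stub
            return method.invoke(target, args);
        };
        return (T) Proxy.newProxyInstance(type.getClassLoader(), new Class<?>[]{type}, handler);
    }
}
```

The caller only ever touches `GreetingService`; swapping the registered target for a stub that sends the call over the event bus changes nothing on the client side, which is the essence of location transparency.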