Fixel is a technology company based in Istanbul that specialises in Digital Transformation and DevOps solutions. Fixel's aim is to revolutionise companies with new technologies that prioritise the customer experience. It approaches every project with the latest methodologies and collaborates closely with clients to execute top-tier solutions flawlessly.
Playall is a financial game in which you can learn financial terms and solve quizzes to improve yourself in the field of finance.
We wanted to create a financial game for Ak Yatırım Menkul Değerler A.Ş. at the lowest possible cost and without server-side operations. For these reasons, Fixel's DevOps team decided to use a serverless approach for this project.
Why Serverless?
Ak Yatırım Menkul Değerler A.Ş. is one of the largest investment companies in Turkey. Our team did not want to worry about the system's operability or how much traffic it would get.
Our Serverless Architecture on AWS
In this project we used Amazon Cognito for authentication. Users can play without registration or sign in via Facebook.
We used Amazon DynamoDB to store questions and answers, and AWS Lambda to retrieve data and check answers.
We created an Amazon API Gateway for request management and security.
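To make the flow concrete, here is a hedged sketch of the answer-checking logic a Lambda handler behind the API Gateway might perform. The class, question IDs, and answers are hypothetical, and an in-memory map stands in for the DynamoDB questions table, which the real function would query by question ID:

```java
import java.util.Map;

public class QuizChecker {
    // Hypothetical stand-in for the DynamoDB table of questions and answers;
    // a real Lambda would fetch the item for the given question id instead.
    static final Map<String, String> ANSWERS = Map.of(
            "q1", "compound interest",
            "q2", "dividend");

    // The check a Lambda invoked via API Gateway might run for each quiz answer.
    public static boolean check(String questionId, String given) {
        String correct = ANSWERS.get(questionId);
        return correct != null && correct.equalsIgnoreCase(given.trim());
    }
}
```

Because the check is stateless, any number of concurrent Lambda invocations can run it, which is exactly what makes the serverless approach scale without operational work.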
At the end of the day, we don't have to worry about the scalability of the architecture on AWS.
Benefits of AWS on this project:
In this project, Ak Yatırım Menkul Değerler A.Ş. met all requirements by using AWS and ran the entire workload for the PlayAll game without the need for any server, database, or other security product.
In this post, I am going to demonstrate a two-step Scala implementation of a Radial Basis Function Neural Network (RBFNetwork): (unsupervised) k-means clustering first, and (supervised) gradient descent second. This two-step implementation is fast and efficient compared to a Multilayer Perceptron while providing good predictive performance. This is because the unsupervised first step provides information about the data distribution, so the second step starts with an intuition about the data and only has to fine-tune the model.
For us, most of the Machine Learning models we use today are just black-box approaches taken from some library. This ability is good for faster development; however, it would be nicer to know the internal dynamics of their implementations. Therefore, I am hoping that this post will be simple enough to give you that information.
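As a taste of the first step, here is a minimal sketch of 1-D k-means, the unsupervised pass that places the radial basis centers on the data. (The post's full implementation is in Scala; this Java sketch with hard-coded iteration count and fixed initial centers is only illustrative.)

```java
public class KMeans1D {
    // Minimal 1-D k-means: alternate between assigning each point to its
    // nearest center and moving each center to the mean of its points.
    public static double[] fit(double[] data, double[] initialCenters, int iters) {
        double[] c = initialCenters.clone();
        for (int it = 0; it < iters; it++) {
            double[] sum = new double[c.length];
            int[] count = new int[c.length];
            for (double x : data) {                 // assignment step
                int best = 0;
                for (int j = 1; j < c.length; j++)
                    if (Math.abs(x - c[j]) < Math.abs(x - c[best])) best = j;
                sum[best] += x;
                count[best]++;
            }
            for (int j = 0; j < c.length; j++)      // update step
                if (count[j] > 0) c[j] = sum[j] / count[j];
        }
        return c;
    }
}
```

The resulting centers become the RBF neurons' centers; the supervised gradient-descent step then only needs to learn the output weights.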
Have you ever needed to access your servers in a virtual private cloud environment from outside the "safe zones"? Moreover, is your team willing to log in to those machines from almost anywhere, at the least appropriate times? And does all this require Volvo-class security? Well, we've got you covered.
Since this article focuses on AWS, our challenge comprises two major concepts:
Enabling the content of landing pages via an S3 endpoint, in addition to IP whitelisting.
Making sure no one routes internet traffic through the cloud network, without losing connectivity.
Before telling more, let's assume a basic scenario: you've launched a t2.nano instance, started an OpenVPN server (with Docker or not), and created a security rule that allows your OpenVPN instance's Elastic IP address to reach any machine in the VPC as well as your S3 buckets. Sounds great, but not very ideal. The thing is, AWS doesn't charge a penny for network traffic inside your VPC; with this setup, however, traffic going from one EC2 instance to another travels over the internet, which means both cost and speed issues. The same surely applies to your buckets. Not to mention that you are also providing a kind of open-source ZenMate solution to all your users, because whatever they download or upload goes through your network and your very own encrypted connection. And that is something no system administrator wants to be responsible for.
So what is the exact solution? Obviously it isn't blocking outgoing traffic to 0.0.0.0/0, and I can assure you, we have tried countless approaches, from config files to kernel parameters. Now meet the most elegant way: OpenVPN-AS.
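The underlying idea is split tunneling: push only the private VPC routes to clients instead of a default gateway, so VPN traffic carries only what belongs in the VPC. As a plain-OpenVPN illustration (the 10.0.0.0/16 CIDR is a hypothetical VPC range; adjust to yours), the relevant server-side directives would look like:

```
# Split tunneling: do NOT push a default route to clients,
# i.e. leave out:  push "redirect-gateway def1"

# Push only the VPC network, so only private traffic enters the tunnel;
# clients keep using their own internet gateway for everything else.
push "route 10.0.0.0 255.255.0.0"
```

OpenVPN-AS exposes the same choice in its admin UI ("route all client traffic through the VPN" vs. routing only specified private networks), which is what makes it the elegant way here.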
Tired of having your changing dependencies slow you down? Gradle 3.1's composite builds give you a straightforward way to change and refresh your dependencies.
Our journey with Gradle started one and a half years ago with this presentation. Everybody in the company loved its expressive, easy-to-understand structure and realized that a plugin model was the wrong level of abstraction. Instead, a language-based approach was the right one in terms of its flexibility for the long term.
So it didn't take long for almost 50 members of the development team to move the whole build infrastructure to Gradle. Gradle doesn't just throw away the foundation that other build tools laid. Instead, it builds on top of them, easily and more powerfully, while remaining 100% compatible with them. This made Gradle not an alternative but an upgrade for us.
While we were migrating projects, simple pom.xml projects required only a few steps, such as "gradle init", but others needed more effort. Everything was good with Gradle except for one thing: composite builds.
And finally, our most-requested feature came out with Gradle 3.1, and now we love Gradle more than ever.
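For illustration, a composite build is wired up in the consuming project's settings.gradle; the project names and path below are hypothetical:

```groovy
// settings.gradle of the consuming application (Gradle 3.1+).
rootProject.name = 'my-app'

// 'my-lib' is a hypothetical sibling project normally consumed as a
// binary dependency. includeBuild substitutes the local source build
// for it, so changes in my-lib are picked up without publishing.
includeBuild '../my-lib'
```

With this in place, editing the dependency and seeing the change in the consuming app no longer requires an install/publish round-trip.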
This article is more of a proof of concept of microservice development, based on a real-world implementation of a specific project built on top of an integration between two stock markets, Nasdaq and BIST. The project requires different types of approaches and know-how, which can be grouped into two categories: stream data API implementation and data analysis. Both sides have different challenges, as they have different requirements. Read more →
AI and deep learning are transforming the way we understand software, making computers more intelligent than we could even imagine just a decade ago. It is the technology behind self-driving cars, intelligent personal assistants, and decision support systems. Deep learning algorithms are being used across a broad range of industries. As the fundamental driver of AI, being able to tackle deep learning with Java is going to be a vital and valuable skill, not only within the tech world but also for the wider global economy that depends upon knowledge and insight for growth and success.
Before containers were integrated with applications, RoR application deployments were managed either manually or with a handier tool such as Capistrano. Either way, a couple of required procedures need to be applied to every new change set of the source code that is to be deployed as a version.
From an administration point of view, RoR applications are treated as file-based applications, similar to PHP. Unlike Java or Go, there is no single binary or archived deployable artifact. Therefore, every change set contains several files and directories, which makes deployment (directly or indirectly) a process between the SCM and the destination servers. Capistrano handles this quite well; in particular, in case of any deployment error, its automated rollback capability tries to keep all the target nodes in the cluster on the same version. Read more →
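For readers unfamiliar with Capistrano, a minimal deploy configuration sketch looks like the following; the application name and repository URL are hypothetical:

```ruby
# config/deploy.rb -- minimal Capistrano sketch (hypothetical names).
set :application, "my_ror_app"
set :repo_url,    "git@example.com:acme/my_ror_app.git"

# Capistrano keeps timestamped release directories and a `current`
# symlink on each server; flipping the symlink back is what makes the
# automated rollback to the previous version possible.
set :keep_releases, 5
```

The release-directory layout is also the reason a file-based RoR deployment stays atomic despite touching many files.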
Type inference is a great programming feature that helps coders write clean, readable code in a reasonable amount of time. Learn more here.
Writing type-safe code while keeping boilerplate low is an important aspect of programming languages in terms of developer productivity. Type-safe code is less error-prone, and less boilerplate leads to more readable code; together they mean reduced development time. Type inference is a great programming-language feature that maintains this balance.
Developers usually read code more often than they write it. Thus, even if the source code will end up being processed by a computer, most of the time our focus is on putting it into a more human-readable form. At some point, we pay attention to UX principles, like:
Humans have a limited attention span, so source code should help spend this attention wisely. Information comes at a cost: the longer the code, the more overwhelming it is to read.
Sometimes, less means more. Short code may look brilliant, but it amplifies the time that others must spend on it. We should provide just enough information. Implicit values, if we do not abuse them, are the fix for this.
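As a concrete illustration of this balance, consider Java 10's local-variable type inference (`var`): the variable keeps its full static type, but the reader is spared the duplicated declaration. The class and values below are hypothetical:

```java
import java.util.HashMap;
import java.util.List;

public class InferenceDemo {
    public static int demo() {
        // Explicit: the generic type is spelled out in full.
        HashMap<String, List<Integer>> explicit = new HashMap<>();

        // Inferred (Java 10+): same static type, less boilerplate, and
        // still fully type-safe -- e.g. inferred.put(1, 2) won't compile.
        var inferred = new HashMap<String, List<Integer>>();

        inferred.put("scores", List.of(1, 2, 3));
        return inferred.get("scores").size();
    }
}
```

The inferred version carries exactly the same information for the compiler, while the reader sees just enough of it.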
Vert.x can lend a hand in helping your microservices find each other. See how to get it set up and what it can do for your software.
Remember the Unix philosophy, "Do one thing and do it well"? That is the philosophy of microservices. In software development, it is common practice that when the same functionality appears in different parts of an application, it is abstracted away as a separate component.
This article shows you how to launch Vert.x, the toolkit for creating reactive apps on the JVM, in a dynamic way.
Vert.x started back in 2011, and it was one of the first projects to push the reactive microservice model for modern applications that need to handle a lot of concurrency. Since then, people have developed best practices, from writing good-quality code using Rxified Vert.x, RxJava's Observable, and JoinObservable, to deployment using Docker, Kubernetes, or Swarm. Vert.x does not force developers to obey particular rules and standards; therefore, it is a better fit for today's Agile environments and Lean Enterprises. Thus, developers like us, who are keen on freedom, can try new ways of doing things. With that in mind, we did not want to launch our microservices in statically defined ways. So in this article, I want to introduce how we launch Vert.x in a dynamic way, and in the coming days we plan to publish a series of articles about how we use brand-new methods for things like service discovery and deployment.
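The core idea behind dynamic launching is that a verticle is addressed by its class name as a string, so the set of deployed components can come from configuration or a registry at runtime rather than from code. Here is a hedged, plain-Java sketch of that idea using reflection; in real Vert.x code the equivalent call is `vertx.deployVerticle("com.example.MyVerticle")`, and the class name below is only an example:

```java
public class DynamicDeploy {
    // Sketch: instantiate a worker by class name decided at runtime,
    // the same idea Vert.x's deployVerticle(String name) builds on.
    public static Runnable load(String className) {
        try {
            return (Runnable) Class.forName(className)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            return null; // unknown or unloadable name
        }
    }
}
```

Because the class name is just data, adding or swapping a microservice becomes a configuration change instead of a rebuild, which is what "launching in a dynamic way" buys us.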