Hypercompetition: Why Is It Not Profitable?


Hypercompetition, a term coined by Richard D’Aveni, considers competitive analysis under conditions in which competitive advantages are quickly negated, guiding policies are repeatedly flouted by iconoclastic rivals competing across borderless boundaries, and customer loyalty is constantly fickle. Can we have hypercompetition between just two competitors? Yes, because hypercompetition depends on the mindset and actions of the hypercompetitive players, not on the number of competitors.

Temporary advantage belongs to whoever is slightly ahead of the competition, the one who breaks the silence first, much as in the prisoner’s dilemma. Staying ahead always requires action, and that creates a problem at the profit level: hypercompetitive players mostly rely on margins and at times cannot keep up with the demands of innovation and R&D. Oligopolistic cooperation does not produce excess profits because entry barriers are easy to overcome (Porter’s five forces). Firms compete with each other in four arenas: cost and quality, timing and know-how, strongholds, and deep pockets. In the cost-and-quality arena, the trade-off between price and quality is eliminated, forcing the industry’s product offerings toward a point of ultimate value for consumers, combining low prices with high quality. This leaves only marginal profit-scavenging options under hypercompetition. To overcome this, most competitors either redefine quality and raise the prices of their products and services, or push the contest into the timing and know-how arena.

The one who breaks the silence requires a distinctly different set of skills than the followers, and that confers several advantages. Competitive moves can be eased through know-how, creating or altering the resource base with speed, agility and timing. The contest escalates when one competitor builds a new resource base or transforms its strategy to imitate or replicate that of a rival (say, the successful one). This is the leapfrog strategy; it is expensive, competitors can imitate it quickly, and so firms try to create strongholds. Under hypercompetition, however, entry barriers can easily be circumvented. When a stronghold erodes through the entry of more rivals, the rivalry shifts to who can outlast the others on the strength of their deep pockets. Big fish, as I mentioned in my previous article, in superior financial condition can neutralize hypercompetition through tactics such as strategic alliances, acquisitions, franchising, niching, and swarming (moving in large numbers).

Hypercompetition can play out across multiple arenas, or it can stay stuck in a particular arena for a long period. This is where proper project management comes in: suppose the delay between moves is longer, and that delay is deliberate so that enough resources can be built up to give the project some slack for a recoup. Perfect competition can deliver significant profits, but hypercompetition does not; whatever profit it yields is temporary, lasting only until the firm’s advantages are neutralized or eroded by other competitors.

Hypercompetition takes the following points as assumptions:

  1. Firms mostly destroy their own competitive advantages and keep creating new products. They deliberately cannibalize their own leading product before it completes its full product life cycle. Moving to new products requires capital for innovation, production and so on, which shrinks margins.
  2. Those who are determined will likely break through entry barriers; existing barriers only provide a false sense of security, lulling incumbents into complacency.
  3. Consistency and logical thinking are the easiest things for rivals to read. So, it is always advisable to be unpredictable.
  4. Long-term planning only helps to sustain and exploit an existing advantage. Hypercompetition is all about eroding the existing advantages of competitors.
  5. From SWOT, one learns a competitor’s weaknesses. Studies suggest that consistently targeting a weakness will push the competitor to work on it, and it can turn into a strength instead.

Thanks for reading…


Organize around outcomes, not tasks


I was reading Strategy: Process, Content, Context (Bob de Wit and Ron Meyer) and found this interesting.

‘Organize around outcomes, not tasks’

The principle says to have one person perform all the steps in a process by designing that person’s job around an objective or outcome instead of a single task. The redesign at Mutual Benefit Life, where individual case managers perform the entire application-approval process, is the quintessential example of this. I completely support this for many reasons, and I hope others will agree with me.

The redesign of an electronics company is another example. It had separate organizations performing each of the five steps between selling and installing the equipment. One group determined customer requirements, another translated those requirements into internal product codes, a third conveyed that information to various plants and warehouses, a fourth received and assembled the components, and a fifth delivered and installed the equipment. The process was based on the centuries-old notion of specialized labor and on the limitations inherent in paper files. Each department possessed a specific set of skills, and only one department at a time could do its work.

The customer order moved systematically from step to step. But this sequential processing caused problems. The people getting the information from the customer in step one had to collect all the data anyone would need throughout the process, even if it wasn’t needed until step five. In addition, the many hand-offs were responsible for numerous errors and misunderstandings. Finally, any question about the customer requirements that arose late in the process had to be referred back to the people doing step one, resulting in delay and rework.

When the company reengineered, it eliminated the assembly-line approach. It compressed responsibility for the various steps and assigned it to one person, the ‘customer service representative.’ That person now oversees the whole process: taking the order, translating it into product codes, and getting the components delivered and installed. The customer service rep expedites and coordinates the process, much like a general contractor. And the customer has just one contact, who always knows the status of the order.

Why RESTful API?

Before understanding what REST can provide in an architecture, this is a good time to discuss REST in more detail. Dr. Roy Fielding, the creator of the architectural approach called REST, looked at how the Internet, a highly distributed network of independent resources, worked collectively with no knowledge of any resource located on any server. Fielding applied those same concepts to REST by declaring the following four major constraints.
1. Separation of resource from representation. Resources and representations must be loosely coupled. For example, a resource may be a data store or a chunk of code, while the representation might be an XML or JSON result set or an HTML page.
2. Manipulation of resources by representations. A representation of a resource with any metadata attached provides sufficient information to modify or delete the resource on the server, provided the client has permission to do so.
3. Self-descriptive messages. Each message provides enough information to describe how to process the message. For example, an Accept: application/xml header tells the parser to expect XML as the format of the message.
4. Hypermedia as the engine of application state (HATEOAS). The client interacts with applications only through hypermedia (e.g., hyperlinks). The representations reflect the current state of the hypermedia applications.

Let’s look at these constraints one at a time. By separating the resource from its representation, we can scale the different components of a service independently. For example, if the resource is a photo, a video, or some other file, it may be distributed across a content delivery network (CDN), which replicates data across a high-performance distributed network for speed and reliability. The representation of that resource may be an XML message or an HTML page that tells the application what resource to retrieve. The HTML pages may be executed on a web server farm across many servers in multiple zones in Amazon’s public cloud—Amazon Web Services (AWS)—even though the resource (let’s say it is a video) is hosted by a third-party CDN vendor like AT&T. This arrangement would not be possible if both the resource and the representation did not adhere to the constraint.

The next constraint, manipulation of resources by representations, basically says that resource data (let’s say a customer row in a MySQL table) can only be modified or deleted on the database server if the client sending the representation (let’s say an XML file) has enough information (PUT, POST, DELETE) and has permission to do so (meaning that the user specified in the XML message has the appropriate database permissions). Another way to say it: the representation should carry everything it needs to request a change to a resource, assuming the requester has the appropriate credentials.

The third constraint simply says that the messages must contain information that describes how to parse the data. For example, Twitter has an extensive library of APIs that are free for the public to use. Since the end users are unknown entities to the architects at Twitter, they have to support many different ways for users to retrieve data. They support both XML and JSON as output formats for their services. Consumers of their services must describe in their requests which format their incoming messages are in so that Twitter knows which parser to use to read the incoming messages. Without this constraint, Twitter would have to write a new version of each service for every different format that its users might request. With this constraint in place, Twitter can simply add parsers as needed and can maintain a single version of its services.
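
To make the self-descriptive-messages idea concrete, here is a minimal Python sketch using the requests library. The URL is a placeholder (not a real Twitter endpoint); the point is only that the same resource can be asked for in different formats via the Accept header:

```python
import requests

# Hypothetical endpoint, used purely for illustration.
url = "https://api.example.com/customers/42"

# Ask for a JSON representation of the resource...
as_json = requests.get(url, headers={"Accept": "application/json"})

# ...or ask the same resource for XML; only the representation changes,
# and the server picks the right serializer based on the self-descriptive header.
as_xml = requests.get(url, headers={"Accept": "application/xml"})

print(as_json.headers.get("Content-Type"))
print(as_xml.headers.get("Content-Type"))
```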

The fourth and most important constraint is HATEOAS. This is how RESTful services work without maintaining application state on the server side. By leveraging hypermedia as the engine of application state, the application state is represented by a series of links (uniform resource identifiers, or URIs) on the client side, much like following the site map of a website by following its URLs. When a resource (i.e., a server or connection) fails, the resource that resumes working on the service starts with the URI of the failed resource (the application state) and resumes processing.

A good analogy of HATEOAS is the way a GPS works in a car. Punch in a final destination on the GPS and the application returns a list of directions. You start driving by following these directions. The voice on the GPS tells you to turn when the next instruction is due. Let’s say you pull over for lunch and shut off the car. When you resume driving, the remaining directions in the trip list pick right up where they left off. This is exactly how REST works via hypermedia. A node failing is similar to shutting your car off for lunch and another node picking up where the failed node left off is similar to restarting the car and the GPS. Make sense?
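
A rough sketch of what HATEOAS looks like from the client side, assuming a hypothetical API whose JSON responses embed their follow-up links under a `_links` key (real link formats vary between services):

```python
import requests

# Hypothetical API root; the link-relation names below are assumptions.
root = requests.get("https://api.example.com/",
                    headers={"Accept": "application/json"}).json()

# The response itself tells the client where it may go next, e.g.:
# {"_links": {"orders": {"href": "https://api.example.com/orders"}}}
orders_url = root["_links"]["orders"]["href"]

# The application state is simply the URI the client currently holds.
# If this client dies, another one can resume from the same URI.
orders = requests.get(orders_url, headers={"Accept": "application/json"}).json()
print(orders)
```
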
Why are the four constraints of REST so important when building solutions in the cloud? The cloud, like the Internet, is a massive network of independent resources that are designed to be fault-tolerant. By following the constraints of REST, the software components that run in the cloud have no dependencies on the underlying infrastructure that may fail at any time. If these four constraints are not followed, it creates limitations on the application’s ability to scale and to fail over to the next available resource.

As with any architectural constraint, there are trade-offs. The more abstraction that is built into an architecture, the more flexible and agile the architecture will be, but it comes with a price. Building RESTful services correctly takes more up-front time because building loosely coupled services is a much more involved design process. Another trade-off is performance. Abstraction creates overhead, which can impact performance. There may be some use cases where the performance requirements far exceed the benefits of REST and, for that particular use case, another method may be required. There are other design issues to be aware of that are covered in the next section.

The Challenges of Migrating Legacy Systems to the Cloud

One of the challenges companies have when they decide to port applications from on-premises to the cloud is that many of their legacy systems rely on ACID transactions. ACID (atomicity, consistency, isolation, durability) transactions are used to ensure that a transaction is complete and consistent. With ACID transactions, a transaction is not complete until it is committed and the data is up to date. In an on-premises environment where data may be tied to a single partition, forcing consistency is perfectly acceptable and often the preferred method.

In the cloud, it is quite a different story. Cloud architectures rely on Basically Available, Soft State, Eventually Consistent (BASE) transactions. BASE transactions acknowledge that resources can fail and that the data will eventually become consistent. BASE is often used in volatile environments where nodes may fail or where systems need to work whether or not the user is connected to a network. This is extremely important as we move into the world of mobile, where connectivity is spotty at times.

Getting back to the legacy system discussion: legacy systems often rely on ACID transactions, which are designed to run in a single partition and expect the data to be consistent. Cloud-based architectures require partition tolerance, meaning that if one instance of a compute resource cannot complete the task, another instance is called on to finish the job. Eventually the discrepancies will be reconciled and life will go on its merry way. However, if a legacy system with ACID transactionality is ported and not modified to deal with partition tolerance, users of the system will not get the data consistency they are accustomed to and will challenge the quality of the system. Architects will have to account for reconciling inconsistencies, which is nothing new. In retail they call that balancing the till, an old way of saying making sure the cash in the drawer matches the receipt tape at the end of the day. But many legacy applications were not designed to deal with eventual consistency and will frustrate end users if they are simply ported to the cloud without addressing this issue.

What about those mega-vendors whose legacy applications are now marketed as cloud-aware? Most of those rebranded dinosaurs are actually running in a single partition and don’t really provide the characteristics of cloud-based systems such as rapid elasticity and resource pooling. Instead, many of them are simply large, monolithic legacy systems running on a virtual machine at a hosted facility, a far cry from being a true cloud application. It is critical that architects dig under the covers of these vendor solutions and make sure that they are not being sold snake oil.
There is a new breed of vendors that offer cloud migration services. It is important to note that these solutions simply port the legacy architecture as is. That means that if a legacy application can only run in a single tenant, it will not be able to take advantage of the elasticity that the cloud offers. For some applications, there may be no real benefit to porting them to the cloud.
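
As a toy illustration of the “balancing the till” idea, here is a small Python sketch (not from the book) that compares the transactions two replicas have seen and reports what still has to converge. Under BASE these differences are expected and become input for a reconciliation job rather than being treated as failures:

```python
from collections import Counter


def reconcile(replica_a, replica_b):
    """Return the transactions each replica is still missing.

    Under BASE the two lists are expected to converge eventually;
    until then the differences are simply reconciliation work.
    """
    a, b = Counter(replica_a), Counter(replica_b)
    missing_in_b = list((a - b).elements())
    missing_in_a = list((b - a).elements())
    return missing_in_a, missing_in_b


# Replica B has not yet seen txn-3; a later pass will apply it.
print(reconcile(["txn-1", "txn-2", "txn-3"], ["txn-1", "txn-2"]))
```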

Summary
Architecting solutions for cloud computing requires a solid understanding of how the cloud works. To build resilient solutions that scale, one must design a solution with the expectation that everything can and will fail. Cloud infrastructure is designed for high availability and is partition tolerant in nature. Migrating single-partition applications to the cloud makes the migration act more like a hosting solution rather than a scalable cloud solution. Building stateless, loosely coupled, RESTful services is the secret to thriving in this highly available, eventually consistent world. Architects must embrace this method of building software to take advantage of the elasticity that the cloud provides.

References
Bloomberg, J. (2013). The Agile Architecture Revolution: How Cloud Computing, REST-Based SOA, and Mobile Computing Are Changing Enterprise IT. Hoboken, NJ: John Wiley & Sons.
Bloomberg, J. (2011, June 1). “BASE Jumping in the Cloud: Rethink Data Consistency.” Retrieved from http://www.zapthink.com/2011/06/01/base-jumping-in-the-cloud-rethinking-data-consistency/.
Fielding, R. (2000). “Representational State Transfer (REST),” in “Architectural Styles and the Design of Network-based Software Architectures.” Ph.D. dissertation, University of California, Irvine. Retrieved from http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm.
Hoff, T. (2013, May 1). “Myth: Eric Brewer on Why Banks Are BASE Not ACID—Availability Is Revenue.” Retrieved from http://highscalability.com/blog/2013/5/1/myth-eric-brewer-on-why-banks-are-base-not-acid-availability.html.

 

Defence-In-Depth

The question is: should you use one firewall for both the perimeter and the internal network? Some would argue that if an attacker can hack the first firewall, he or she can certainly attack the internal firewall too, so why spend on two firewalls? Another argument single-firewall proponents bring to the table is that if there is any configuration mismatch between the internal and perimeter firewalls, there is no point in having two. For example, if something is bypassed through human error, the doors are wide open for attackers. In addition to this, you have to protect your environment from worms and malware, which come in the form of bots and keep looking for loopholes to exploit. So don’t think someone in China or the US is sitting around waiting for you to make a mistake 🙂

If you read for the CISSP, one of the very basic design principles is defence in depth. This means you have layers of security that make the attacker’s job much harder: IPS at the firewall, anti-virus at the firewall, stateful inspection at the firewall, anomaly detection at the firewall, then security modules on the routers, an internal firewall, the WAF, then the firewalls on the servers themselves, and if possible host-level anti-virus and IPS. See how many checkpoints there are? Many, right?

Likewise, government agencies suggest using physical appliances wherever possible, such as a physical IPS and two separate firewalls. Now that virtualization has arrived and everyone is looking to save cost, regulatory bodies are making such requirements mandatory, because a converged setup on a single piece of physical hardware poses a risk if that hardware itself has a bug or is compromised. The risk and business impact of such an incident would be huge, including damage to the brand image. Thus, one should be careful about saving cost, but never at the cost of the brand image and lifeline of the company. Attackers keep learning, and no software is foolproof. Let’s throw some light on the defence-in-depth strategy.

A good Defense-in-Depth strategy involves many different technologies, such as Intrusion Detection, Content Filtering, and Transport Layer Security. The single most important element, however, is a system of internal firewalls. Proper deployment of these devices can address several security concerns:

• Employees will not have unrestricted access to the entire network, and their activity can be monitored.

• Partners, customers, and suppliers can be given limited access to whatever resources they require, while maintaining isolation of critical servers.

• Critical servers can be closely monitored when they are isolated behind an internal firewall. Any malicious activity would be much easier to detect, since the firewall has a limited amount of traffic passing through it.

• Remote users can be restricted to certain portions of the network, and VPN traffic can be contained and easily monitored.

• A security breach in one segment of the network will be limited to local machines, instead of compromising the security of the entire network.

With a system of internal firewalls in place, we can come much closer to our ideal network. Instead of an all-or-nothing security posture, we can achieve Defense-in-Depth by forcing an attacker to penetrate multiple layers of security to reach mission-critical servers.

Conclusion:

As for best practice: we have been engaged with medium to high-end customers, including finance and insurance, and have observed a dual-firewall design for the DMZ and internal LAN; that is the approach we recommend. It is also the most secure approach according to Stuart Jacobs: use two firewalls to create a DMZ. The first firewall (also called the “front-end” or “perimeter” firewall) is configured to allow traffic destined for the DMZ only. The second firewall (also called the “back-end” or “internal” firewall) allows only traffic from the DMZ to the internal network.

This setup is considered more secure since two devices would need to be compromised. There is even more protection if the two firewalls come from two different vendors, because it is less likely that both devices suffer from the same security vulnerabilities. The threat that a bug in one firewall exposes the whole infrastructure is therefore reduced by using firewalls from two different vendors. For example, an accidental misconfiguration is less likely to be repeated the same way across the configuration interfaces of two different vendors, and a security hole found in one vendor’s system is less likely to exist in the other.
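
To make the two-firewall rule set concrete, here is a minimal Python sketch of the policy described above. The zone names and rule tables are purely illustrative, not a real firewall configuration:

```python
# Zone pairs each firewall permits (illustrative only).
FRONT_END_ALLOWS = {("internet", "dmz")}    # perimeter firewall
BACK_END_ALLOWS = {("dmz", "internal")}     # internal firewall


def is_allowed(src, dst):
    """A session is allowed only if every firewall on its path permits it."""
    if src == "internet":
        # The perimeter firewall only admits traffic destined for the DMZ,
        # so a direct internet-to-internal session can never be established.
        return (src, dst) in FRONT_END_ALLOWS
    if src == "dmz":
        return (src, dst) in BACK_END_ALLOWS
    return False


print(is_allowed("internet", "dmz"))       # True  - web traffic to the DMZ
print(is_allowed("dmz", "internal"))       # True  - app tier to internal servers
print(is_allowed("internet", "internal"))  # False - blocked by design
```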

Azure Machine Learning Benefits & Pitfalls

Today, allow me to write something on machine learning and give a verdict on Azure Machine Learning and its pitfalls. Let’s start with what machine learning is: the construction and study of algorithms that can learn from data. There are two broad approaches, supervised learning and unsupervised learning, and the decisions taken in ML to solve problems are typically regression, classification and clustering.

Some examples of supervised and unsupervised problems are given below to help you understand.

In unsupervised learning, data points have no labels associated with them. Instead, the goal of an unsupervised learning algorithm is to organize the data in some way or to describe its structure. This can mean grouping it into clusters or finding different ways of looking at complex data so that it appears simpler or more organized.

  1. To identify patterns in data – unsupervised learning
  2. Study the past – unsupervised learning
  3. Learning from the historical data to Predict / Recommend – Supervised learning

A supervised learning algorithm looks for patterns in those value labels. It can use any information that might be relevant—the day of the week, the season, the company’s financial data, the type of industry, the presence of disruptive geopolitical events—and each algorithm looks for different types of patterns. After the algorithm has found the best pattern it can, it uses that pattern to make predictions for unlabeled testing data—tomorrow’s prices.

Supervised learning is a popular and useful type of machine learning. With one exception, all the modules in Azure Machine Learning are supervised learning algorithms. There are several specific types of supervised learning that are represented within Azure Machine Learning: classification, regression, and anomaly detection.
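
As a quick illustration of that split (using scikit-learn rather than Azure ML, purely to show the difference between the two approaches), the sketch below clusters unlabeled points and, separately, trains a classifier on labeled ones:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data: 300 points drawn around 3 centers.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Unsupervised: no labels are used, the algorithm only organises the points.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Supervised: the labels y are used to learn a predictive model.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```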

The steps are below –

  1. Planning data storage, setting up the environment and preprocessing the data happen outside the ML system
  2. Setting up the environment includes preparing the storage environment, the preprocessing environment and the ML workspace
  3. HDInsight can be used for preprocessing the data

Microsoft Azure Machine Learning, a fully-managed cloud service for building predictive analytics solutions, helps overcome the challenges most businesses have in deploying and using machine learning.

Now come the pros and cons –

Benefits –

  1. No data limit for pulling data from Azure storage and HDFS systems.
  2. Azure ML is a much friendlier set of tools, and it’s less restrictive about the quality of the training data.
  3. Azure ML’s tools make it easy to import training data and then tune the results.
  4. One-click publishing exposes the data model as a web service (see the sketch after this list).
  5. The cost of maintenance is lower compared to on-premise analytics solutions.
  6. Drag, drop and connect structures are available for building an experiment.
  7. Built-in R module, support for Python, and options for custom R code for extensibility.
  8. Security for the Azure ML service relies on Azure’s security measures.
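
For benefit 4, the published model is exposed as a simple request/response web service. The sketch below shows the general calling pattern for Azure ML Studio endpoints; the URL, API key and input schema are placeholders and must match whatever your own published experiment expects:

```python
import json
import urllib.request

# Placeholders: copy the real values from the web service's API help page.
URL = "https://<region>.services.azureml.net/workspaces/<ws>/services/<svc>/execute?api-version=2.0"
API_KEY = "<your-api-key>"

payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["feature1", "feature2"],   # assumed input schema
            "Values": [["1.0", "2.0"]],
        }
    },
    "GlobalParameters": {},
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + API_KEY,
    },
)
print(urllib.request.urlopen(request).read().decode("utf-8"))
```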

Pitfalls –

× 10 GB data limit for flat-file processing

× Predictive Model Markup Language (PMML) is not supported; however, custom R and Python code can be used to define a module

× There is no version control or Git integration for experiment graphs

× Only smaller amounts of data can be read from systems like Amazon S3

Verdict – if you wish to run deep learning and need resources only occasionally rather than all the time, the cloud is a fantastic option.

Cheers 🙂

Transferring Registration for a Domain to Amazon Route 53

I am writing this article to guide you through the process of transferring a domain and its DNS service, or just the DNS service, from one service provider to another. If you skip any of these steps, your domain may become unavailable. The steps are below:

Step 1: Confirm that Amazon Route 53 Supports the Top-Level Domain. Check through –

For Top level domains – http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/registrar-tld-list.html#registrar-tld-list-generic

Some of the supported top-level domains are .uk, .co.uk, .me.uk, and .org.uk.

Step 2: Transfer Your DNS Service to Amazon Route 53 or Another DNS Service Provider.

Some registrars provide free DNS service when you purchase a domain, which is why you should transfer DNS first. If the registrar for your domain is also the DNS service provider for the domain, transfer your DNS service to Amazon Route 53 or another DNS service provider before you continue with the process of transferring the domain registration.

Step 3: Change Settings with the Current Registrar

Using a WHOIS query, you can find out whether the domain is locked. You need to disable the domain’s transfer protection before proceeding; ICANN, the governing body for domain registrations, requires that you unlock your domain before you transfer it.

Step 4: Get the Names of Your Name Servers

If you want to keep your current DNS service provider while transferring the domain registration to Amazon Route 53, follow the procedure that your DNS service provider has given you to get the names of your name servers.

And if it is the other way around, i.e. your DNS is already with Route 53 and only the domain registration is with another provider, then you can skip Step 4. Be happy 🙂 and go to Step 5.

Step 5: Request the Transfer

To transfer domain registration to Amazon Route 53 (up to five domains at a time):

  1. Open the Amazon Route 53 console at https://console.aws.amazon.com/route53/.
  2. In the navigation pane, choose Registered Domains.
  3. Choose Transfer Domain.
  4. Enter the name of the domain for which you want to transfer registration to Amazon Route 53, and choose Check.
  5. If the domain registration is available for transfer, choose Add to cart.

If the domain registration is not available for transfer, the Amazon Route 53 console lists the reasons. Contact your registrar for information about how to resolve the issues that prevent you from transferring the registration.
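
If you prefer to script this check instead of using the console, the Route 53 Domains API exposes the same transferability test. A minimal boto3 sketch (the domain name is a placeholder, and the API is served only from us-east-1):

```python
import boto3

# The Route 53 Domains API is a global service served from us-east-1.
client = boto3.client("route53domains", region_name="us-east-1")

response = client.check_domain_transferability(DomainName="example.com")
print(response["Transferability"]["Transferable"])  # e.g. TRANSFERABLE or UNTRANSFERABLE
```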

  6. If you want to transfer other domain registrations, repeat steps 4 and 5.
  7. When you’ve added all the domain registrations that you want to transfer, choose Continue.
  8. For each domain name that you want to transfer, enter the authorization code.

Note:

Some registrars stop providing DNS service as soon as you request a transfer to another registrar. If the current registrar disables DNS service, your domain becomes unavailable on the internet. The transfer process can take about two days, and if the current registrar stops the DNS service during that time, the domain will be offline.

  9. Provide contact details.
  10. Provide postal and ZIP codes.
  11. Complete the purchase.

Step 12: Click the Link in the Authorization Email

Once the order is complete, an authorization email will arrive in your inbox; click the link in it to approve the transfer.

Step 13: Update the Domain Configuration

You can now update settings such as the transfer lock and automatic renewal to suit your newly transferred domain.
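
These post-transfer settings can also be changed programmatically. A small boto3 sketch, again with a placeholder domain name, that re-enables the transfer lock and automatic renewal once the transfer completes:

```python
import boto3

client = boto3.client("route53domains", region_name="us-east-1")

# Re-lock the domain against further transfers and keep it auto-renewing.
client.enable_domain_transfer_lock(DomainName="example.com")
client.enable_domain_auto_renew(DomainName="example.com")
```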

Why should you stop trading Bitcoin? – the theory of intrinsic value from David Ricardo

Bitcoin is a virtual currency that has been making news and growing at a rate of 10% every day. I did a lot of research on Bitcoin in 2013/14 [https://abfreshmind.wordpress.com/2014/01/15/bitcoin-to-bring-innovation-and-prove-public-policy-benefits/], and now that it is making headlines again I thought of writing about it and warning people about its uncertain nature. Bitcoin is not at all sustainable, and it is one of the most heavily manipulated currencies. Why so? Let’s say you bought bitcoin at $1,000 and sold it to someone at $10,000. You have a profit of $9,000, but are you paying tax on that? The burden of proof lies with you to pay the tax, and if you don’t, you are defaulting and can be in trouble. Bitcoin’s erratic price has no intrinsic value behind it, so the market can be manipulated in exactly that way; that is why very few people own a huge share of it. It is unlike buying a Google or Apple share and selling at a profit of $10 or $20. I don’t know of anyone paying tax on the profit received from Bitcoin.

The cost of mining a bitcoin is compute power, which transfers ownership from one party to another; much of this mining is currently done in Japan and Iceland, where cooling costs are lower and thus the TCO of running bitcoin mining servers is lower. So, coming back to the point: Bitcoin has been manipulated badly, with buying and selling at any price, and that has resulted in this bubble. At some point it will burst, and there will be huge consequences.

There is much speculation treating bitcoin as equivalent to gold, or even a better option than gold. I too wrote an article about Bitcoin becoming more like a commodity when people were holding it rather than selling in 2013/14 (https://abfreshmind.wordpress.com/2014/01/17/mistakes-from-satoshi-nakamoto-in-developing-bitcoin/).

There is no intrinsic value attached to it, which brings us back to David Ricardo’s theory of intrinsic value. If you manufacture a product at 200, let’s say that is its cost price, but the market value is set at 180, why would someone pay you 200? The risk for Bitcoin lies in its lack of intrinsic value and of use for a good purpose. Another omnipresent risk is regulatory risk: Bitcoin is a rival to government currency and may be used for black-market transactions, money laundering, illegal activities or tax evasion. The SEC and FBI are against it, and it is not a good thing to put your money on. We have seen the great depressions of 1929 and 2008, but the bubble burst from Bitcoin is still awaited.

Why Fortinet Suits Being an Internal Firewall? … Where Does Cisco Rank Well?

I am comparing Cisco Firepower vs Palo Alto vs Fortinet as options for a perimeter firewall.

Cisco is pretty good with IPS and has been ranked #1 by Gartner. In the NGFW (next-generation firewall) ranking, however, Cisco is placed in the Challengers quadrant. Here I am listing a few of the features that Cisco does not provide in their Cisco Firepower model. However, Cisco is a fighter and will definitely come up with feature enhancements pretty soon.

  • Integrated Antivirus
  • Protocol scanning (HTTPS)
  • SSL VPN
  • Encrypted VPN Inspection
  • SSL Client OS Support

Now let’s throw some light on Fortinet as a product and its limitations. Fortinet as a firewall has all the required features that an NGFW should have, but there is a lot of ambiguity in the market around Fortinet. Many would prefer a Fortinet firewall in their environment, on-premise or in the cloud, but is that a good, smart choice? Should you go with Fortinet as the perimeter firewall? Some look to save cost for a better price/performance ratio, but is that wise? Let’s discuss some of the limitations below. Fortinet is one of the rare firewalls to offer WAN optimization and the like, but do we really need those?

  1. Its attach rate for cloud-based sandboxing is low, and the feature has received few improvements since its first release. Some prospective customers with high-risk exposure still express doubts about Fortinet’s ability to meet their security requirements.
  2. Fortinet does not offer the direct vendor support and premium subscriptions that large enterprise clients might require.
  3. Centralised and cloud-based management have made insufficient progress to positively influence Fortinet’s score during technical evaluation.
  4. WAN optimisation does not work for encrypted traffic; avoid optimisation for encrypted network traffic.
  5. Some features, like WAN optimisation, that Fortinet supports and Palo Alto doesn’t are essentially extras one might never use in an environment. The following application control 2.0 features do not work in combination with WAN optimisation:
  • SSL interception
  • Virus scanning in the firewall
  • ATP – Advanced Threat Protection

Fortinet scores pretty well on Gartner’s Magic Quadrant, but it is often a second choice when security comes first. One workaround for a better price/throughput solution would be to pair Cisco IPS devices with Fortinet firewalls.

Will You Use Kafka With Lambda Or Kinesis Stream?

Today I am explaining why one should be careful when opting for Kafka with Lambda. While building a customer-centric solution, one might consider an alternative that is both better and cheaper. Let’s first look at the characteristics of a Kafka cluster; later this article will discuss the characteristics of a Kinesis stream.

Characteristics of a Kafka Cluster:

  1. Kafka clusters are made up of 4 core components: topics, partitions, brokers, and Zookeeper
  2. Topics are used to group messages of the same type to simplify access by consumers
  3. Partitions are data stores that hold topic messages. They can be replicated across several brokers
  4. Brokers are nodes in a Kafka cluster and can hold multiple partitions across several topics
  5. Zookeeper is an Apache service that Kafka relies on to coordinate Kafka brokers. This includes leader election, coordination between broker consumers and producers, and broker state tracking
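
For reference, here is roughly what producing to and consuming from a Kafka topic looks like with the kafka-python client; the broker address, topic and consumer group below are placeholders:

```python
from kafka import KafkaConsumer, KafkaProducer

# Placeholder broker and topic names.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders", key=b"order-42", value=b'{"amount": 19.99}')
producer.flush()

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="billing",               # consumer group coordinated by the brokers
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.partition, message.offset, message.value)
    break
```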

Characteristics of a Kinesis stream:

  1. A single Kinesis stream is equivalent to a topic in Kafka.
  2. Each Kinesis stream is made up of a configurable number of shards.
  3. Shards are equivalent to partitions in Kafka terminology.
  4. Shards allow a stream to be scaled dynamically in response to demand fluctuations. To understand what a shard is, think of a single Kinesis stream as a highway, and each shard as a lane. A Kinesis stream’s throughput can be increased by adding more shards, similar to how a highway’s throughput can be increased by adding more lanes.
  5. Producers attach a partition key to each record sent to Kinesis to group data by shard. This can be very helpful in determining how data is routed when shards are added or removed in a stream.
    1. The partition key is designed by the stream creator to reflect how the data should be split in case more shards are added.
    2. It is important to keep in mind that all of this is happening in a single topic/stream. Partition keys are used to determine how data is routed across shards within a single topic/stream.
    3. Example: A stream has a single shard but 4 producers each attaching their unique partition key to the data when they insert it into the stream. Demand starts low with 1 shard being able to support all 4 producers. When demand increases, three more shards can be added for a total of 4 shards in this stream. Based on the partition key design, Kinesis can map the partition keys to the new shards and each producer will get its own shard.
  6. A Kinesis stream can have a minimum of 1 shard and a maximum of 50 (actual maximum is region specific).
  7. Each shard can support up to 2MB/sec data read.
  8. Each shard can support up to 1,000 writes per second, for a maximum of 1MB/sec.
  9. Maximum size of a single data blob in a stream is 1MB.
  10. Default data retention per stream is 24 hours. Increasing this number will increase the per stream cost.
  11. The data retention can be increased in hourly increments up to a maximum of 7 days.
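
And here is the equivalent on the Kinesis side, plus the Lambda handler that would consume the stream; the stream name and region are placeholders, and records arrive in the Lambda event base64-encoded:

```python
import base64
import json

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Producer: the partition key decides which shard receives the record.
kinesis.put_record(
    StreamName="clickstream",   # placeholder stream name
    Data=json.dumps({"user": "u1", "page": "/home"}).encode("utf-8"),
    PartitionKey="u1",
)


# Consumer: a Lambda function subscribed to the stream receives batches
# of records per shard, with each payload base64-encoded.
def handler(event, context):
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        print(record["kinesis"]["partitionKey"], payload)
```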

Amazon Kinesis Streams is a fully managed service that makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. It enables you to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. Apache Kafka is an open-source streaming data solution that you can run on Amazon EC2 to build real-time applications. (AMAZON, https://aws.amazon.com/real-time-data-streaming-on-aws/)