Transferring Registration for a Domain to Amazon Route 53

I am writing this article to guide you through transferring a domain registration, its DNS service, or both from one service provider to another. If you skip any of these steps, your domain can become unavailable on the internet. The steps are below:

Step 1: Confirm that Amazon Route 53 Supports the Top-Level Domain

Check the list of supported top-level domains at http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/registrar-tld-list.html#registrar-tld-list-generic

Supported top-level domains include, for example, .uk, .co.uk, .me.uk, and .org.uk.

Step 2: Transfer Your DNS Service to Amazon Route 53 or Another DNS Service Provider.

Some registrars provide free DNS service when you purchase a domain and may discontinue it once you transfer the domain away; that is why you should transfer DNS first. If the registrar for your domain is also the DNS service provider for the domain, transfer your DNS service to Amazon Route 53 or another DNS service provider before you continue with the process to transfer the domain registration.
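
If you pick Amazon Route 53 as the new DNS service, the zone can also be created programmatically. Below is a minimal, hedged sketch using boto3 (the AWS SDK for Python); "example.com" is a placeholder, AWS credentials are assumed to be configured, and you would still need to recreate your existing DNS records in the new zone.

    # A minimal sketch, assuming boto3 and configured AWS credentials:
    # create a Route 53 hosted zone for the domain being moved.
    import time
    import boto3

    route53 = boto3.client("route53")

    response = route53.create_hosted_zone(
        Name="example.com",                 # placeholder domain
        CallerReference=str(time.time()),   # must be unique per request
    )

    # These are the name servers to configure at the current registrar.
    print(response["DelegationSet"]["NameServers"])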

Step 3: Change Settings with the Current Registrar

ICANN, the governing body for domain registrations, requires that you unlock your domain before you transfer it. Using a WHOIS query, you can find out whether the domain is still locked; if it is, disable the transfer lock (domain protection) with your current registrar before continuing.
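
As a quick sanity check, the lock status can be read from WHOIS. Here is a small, hedged sketch that shells out to the system whois client (assumed to be installed); the domain name is a placeholder.

    # Check whether a domain still carries the registrar transfer lock by
    # looking for the standard EPP status code in the WHOIS output.
    import subprocess

    def is_transfer_locked(domain: str) -> bool:
        result = subprocess.run(["whois", domain], capture_output=True, text=True)
        return "clientTransferProhibited" in result.stdout

    print(is_transfer_locked("example.com"))  # placeholder domain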

Step 4: Get the Names of Your Name Servers

If you want to keep your current DNS service and transfer only the domain registration to Amazon Route 53, get the names of your name servers by following the procedure that your DNS service provider has given you.

And if it is vice versa, with DNS already on Route 53 and the domain registration with another service provider, then you can skip Step 4. Be happy, and go to Step 5.
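
For completeness, if Route 53 already hosts your DNS, you can also look the name servers up yourself. A hedged boto3 sketch, with "example.com." as a placeholder:

    # Find the hosted zone by name and print the name servers that
    # Route 53 assigned to it.
    import boto3

    route53 = boto3.client("route53")

    zones = route53.list_hosted_zones_by_name(DNSName="example.com.")
    zone_id = zones["HostedZones"][0]["Id"]

    zone = route53.get_hosted_zone(Id=zone_id)
    for name_server in zone["DelegationSet"]["NameServers"]:
        print(name_server)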

Step 5: Request the Transfer

To transfer domain registration for up to five domains to Amazon Route 53:

  1. Open the Amazon Route 53 console at https://console.aws.amazon.com/route53/.
  2. In the navigation pane, choose Registered Domains.
  3. Choose Transfer Domain.
  4. Enter the name of the domain for which you want to transfer registration to Amazon Route 53, and choose Check.
  5. If the domain registration is available for transfer, choose Add to cart.

If the domain registration is not available for transfer, the Amazon Route 53 console lists the reasons. Contact your registrar for information about how to resolve the issues that prevent you from transferring the registration.

  6. If you want to transfer other domain registrations, repeat steps 4 and 5.
  7. When you’ve added all the domain registrations that you want to transfer, choose Continue.
  8. For each domain name that you want to transfer, enter the authorization code.

Note:

Some registrars stop providing DNS service as soon as you request a transfer to another registrar, and the transfer can take around two days to complete. If the current registrar disables DNS service during that window, your domain becomes unavailable on the internet.

  9. Provide the contact details for the domain.
  10. Enter the postal and ZIP codes.
  11. Complete the purchase (a hedged API sketch follows below).
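
The same request can be made through the Route 53 Domains API instead of the console. A hedged boto3 sketch follows; every contact detail and the auth code are placeholders you must replace, and this API is only served from the us-east-1 region.

    # Request a domain transfer via the Route 53 Domains API.
    import boto3

    client = boto3.client("route53domains", region_name="us-east-1")

    contact = {
        "FirstName": "Jane",
        "LastName": "Doe",
        "ContactType": "PERSON",
        "AddressLine1": "123 Any Street",
        "City": "Seattle",
        "State": "WA",
        "CountryCode": "US",
        "ZipCode": "98101",
        "PhoneNumber": "+1.2065550100",
        "Email": "jane@example.com",
    }

    response = client.transfer_domain(
        DomainName="example.com",
        DurationInYears=1,
        AuthCode="AUTH-CODE-FROM-CURRENT-REGISTRAR",
        AdminContact=contact,
        RegistrantContact=contact,
        TechContact=contact,
    )
    print(response["OperationId"])  # track progress with get_operation_detail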

Step 6: Click the Link in the Authorization Email

Once the order is complete, an authorization email will arrive in the registrant contact’s inbox. Click the link in that email to confirm the transfer.

Step 7: Update the Domain Configuration

Once the transfer completes, review settings such as the transfer lock and automatic renewal for your newly transferred domain.
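
Both settings can also be switched back on through the API. A minimal, hedged boto3 sketch, with the domain name as a placeholder:

    # Re-enable the transfer lock and automatic renewal after the transfer.
    import boto3

    client = boto3.client("route53domains", region_name="us-east-1")

    client.enable_domain_transfer_lock(DomainName="example.com")
    client.enable_domain_auto_renew(DomainName="example.com")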

Why Should You Stop Trading Bitcoin? - The Theory of Intrinsic Value of David Ricardo

Bitcoin is a virtual currency that has been making news, at times growing at a rate of 10% in a single day.

Before we begin, it is quite helpful to know what Bitcoin is and why it is so ludicrous in nature. It is quite true that “cryptocurrency” as a technology is homogeneously combined with currency, and that it intends to solve general issues which, for some, aren’t really problems at all. It is a complex solution to a less complex problem. If you honestly ask me, my answer would be that there are many other relevant issues we could put our thoughts into to help people and make this world a better place to live. For now, no more red herrings; you can explore the links below to get an idea of what it is all about, but believe me, I don’t claim to fully understand Bitcoin, and I proudly say that.

https://abfreshmind.wordpress.com/2014/01/17/mistakes-from-satoshi-nakamoto-in-developing-bitcoin/

https://abfreshmind.wordpress.com/2014/01/15/bitcoin-to-bring-innovation-and-prove-public-policy-benefits/

…and now, since it has been making headlines, I thought of writing about it and warning people about its uncertain nature.

It is quite aggressive to write this, but Bitcoin is not at all sustainable, because it is one of the most heavily manipulated currencies. Why so?

Let’s say you bought Bitcoin at $1,000 and sold it to someone at $10,000. You have a profit of $9,000, but are you paying tax on that? Not sure, or yes? You had better answer that: the burden of proof lies with you to pay the tax, and if you don’t, you are defaulting and can be in trouble.

Another reason why Bitcoin does not look sustainable to me is that its uncertain price has no intrinsic value behind it, so the market can be manipulated. Like what? Like playing around by buying and selling to amplify speculation; well, you understand that.

That’s why very few people own a huge share of it. It is unlike buying a Google or Apple share and selling it at a profit of $10 or $20. I don’t know of anyone paying tax on the profit received from Bitcoin, and if you do, you are a good citizen.

The cost of mining a Bitcoin is the computing power that transfers ownership from one party to another, which currently runs largely in Japan and Iceland, where cooling costs are low and the TCO of running Bitcoin mining servers is therefore lower. Is all that necessary for a currency? Why this fancy name, “mining”? Does it confuse people into believing the technology cannot go wrong and they can never lose their money? Well, that is not the case.

So, coming back to the point: Bitcoin has been manipulated so badly, bought and sold at any price, that it has resulted in this bubble. There will certainly come a point when the bubble bursts, and the consequences will be huge.

There are many speculations that consider Bitcoin equivalent to gold, or even a better option than gold. I have myself written an article about Bitcoin becoming more like a commodity when people were holding it rather than selling in 2013/14 (link at the beginning). Intrinsic value is the satisfaction quotient we keep in mind when spending money, and to a certain extent it defies the share market. There is no intrinsic value attached to Bitcoin, which brings us back to the theory of intrinsic value of David Ricardo: if you manufacture a product at a cost price of $200 but the market values it at $180, why would someone pay you $200? Even that example still has intrinsic value attached to it; Bitcoin has none. The first risk for Bitcoin, then, is its lack of intrinsic value and of use for a good purpose. Another omnipresent risk is regulatory risk: Bitcoin is a rival to government currency and may be used for black-market transactions, money laundering, illegal activities, or tax evasion. The SEC and FBI are against it, and it is not a good thing to put your money on. We have seen great depressions in 1929 and 2008, but the bubble burst from Bitcoin is still awaited.

Why Does Fortinet Suit Being an Internal Firewall?…Where Does Cisco Rank Well?

I am comparing Cisco Firepower vs Palo Alto vs Fortinet as options for a perimeter firewall.

Cisco is pretty good with IPS and has been ranked #1 by Gartner there, while in the NGFW (next-generation firewall) ranking Cisco is placed in the Challengers quadrant. Here I am listing a few of the features that Cisco does not provide in its Cisco Firepower model. However, Cisco is a fighter and will definitely come up with feature enhancements pretty soon.

  • Integrated antivirus
  • Protocol scanning (HTTPS)
  • SSL VPN
  • Encrypted VPN Inspection
  • SSL Client OS Support

Now let’s throw some light on Fortinet as a product and its limitations. Fortinet as a firewall has all the required features that an NGFW should have, but there is plenty of ambiguity in the market around Fortinet. Many would prefer a Fortinet firewall in their environment, on premises or in the cloud, but is that a good, smart choice? Should you go with Fortinet as the perimeter firewall? Some look to save cost through a better price/performance ratio, but is that a smart choice? Fortinet is one of the rare firewalls to offer extras such as WAN optimization, but do we really need those? Let’s discuss some of the limitations below.

  1. Its attach rate for cloud-based sandboxing is low, and the feature has received few improvements since its first release; some prospective customers with high-risk exposure still express doubts about Fortinet’s ability to meet their security requirements.
  2. Fortinet does not offer the direct vendor support and premium subscriptions that large enterprise clients might require.
  3. Centralised and cloud-based management have made insufficient progress to positively influence Fortinet’s score during technical evaluation.
  4. Some features that Fortinet supports and Palo Alto doesn’t, such as WAN optimisation, are additional features one might never use in an environment. WAN optimisation does not work for encrypted traffic, so avoid it there, and the following application control 2.0 features do not work in combination with WAN optimisation:
  • SSL interception
  • Virus scanning in the firewall
  • ATP – Advanced Threat Protection

Fortinet scores pretty well on Gartner’s Magic Quadrant, but it is also a second choice when security comes first. One workaround for a better price/throughput solution would be to pair Cisco IPS devices with Fortinet firewalls.

Will You Use Kafka With Lambda Or Kinesis Stream?

Today, I am examining why one should be careful when opting for Kafka with Lambda. While building a customer-centric solution, one might think of opting for an alternative that is better and cheaper. Let’s first look at the characteristics of Kafka clusters; later, this article will discuss the characteristics of a Kinesis stream.

Characteristics of a Kafka Cluster:

  1. Kafka clusters are made up of 4 core components: topics, partitions, brokers, and Zookeeper
  2. Topics are used to group messages of the same type to simplify access by consumers (see the producer sketch after this list)
  3. Partitions are data stores that hold topic messages. They can be replicated across several brokers
  4. Brokers are nodes in a Kafka cluster and can hold multiple partitions across several topics
  5. Zookeeper is an Apache service that Kafka relies on to coordinate Kafka brokers. This includes leader election, coordination between broker consumers and producers, and broker state tracking
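
To make the topic/partition vocabulary concrete, here is a minimal producer sketch using the kafka-python client; the broker address and topic name are assumptions.

    # Publish a few keyed messages to a Kafka topic. Messages that share a
    # key are hashed to the same partition, which preserves their order.
    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers="localhost:9092")

    for i in range(3):
        producer.send("orders", key=b"customer-42", value=f"event-{i}".encode())

    producer.flush()   # block until all buffered messages are sent
    producer.close()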

Characteristics of a Kinesis Stream:

  1. A single Kinesis stream is equivalent to a topic in Kafka.
  2. Each Kinesis stream is made up of a configurable number of shards.
  3. Shards are equivalent to partitions in Kafka terminology.
  4. Shards allow a stream to be scaled dynamically in response to demand fluctuations. To understand what a shard is, think of a single Kinesis stream as a highway, and each shard is a lane. A Kinesis stream’s throughput can be increased by adding more shards – similar to how a highway’s throughput can be increased by adding more lanes.
  5. Producers can attach a partition key to each record sent to Kinesis to group data by shards. This can be very helpful in determining how data is routed when shards are added or removed in a stream (see the sketch after this list).
    1. The partition key is designed by the stream creator to reflect how the data should be split in case more shards are added.
    2. It is important to keep in mind that all of this is happening in a single topic/stream. Partition keys are used to determine how data is routed across shards within a single topic/stream.
    3. Example: A stream has a single shard but 4 producers each attaching their unique partition key to the data when they insert it into the stream. Demand starts low with 1 shard being able to support all 4 producers. When demand increases, three more shards can be added for a total of 4 shards in this stream. Based on the partition key design, Kinesis can map the partition keys to the new shards and each producer will get its own shard.
  6. A Kinesis stream can have a minimum of 1 shard and a maximum of 50 (actual maximum is region specific).
  7. Each shard can support up to 2MB/sec data read.
  8. Each shard can support up to 1,000 writes per second, for a maximum of 1MB/sec.
  9. Maximum size of a single data blob in a stream is 1MB.
  10. Default data retention per stream is 24 hours. Increasing this number will increase the per stream cost.
  11. The data retention can be increased in hourly increments up to a maximum of 7 days.
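
To tie the shard and partition-key ideas together, here is a minimal, hedged boto3 sketch; the stream name, region, and payload are placeholders.

    # Create a stream with two shards and write one record with an explicit
    # partition key; records sharing a key hash to the same shard.
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    kinesis.create_stream(StreamName="clickstream", ShardCount=2)
    kinesis.get_waiter("stream_exists").wait(StreamName="clickstream")

    kinesis.put_record(
        StreamName="clickstream",
        Data=b'{"event": "page_view"}',
        PartitionKey="customer-42",
    )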

Amazon Kinesis Streams is a fully managed service that makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. It enables you to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. Apache Kafka is an open-source streaming data solution that you can run on Amazon EC2 to build real-time applications. (Amazon, https://aws.amazon.com/real-time-data-streaming-on-aws/)

Comparing Enterprise Network Firewalls: Who Wins?

I will give my views on how to select the right perimeter firewall. There are many options in the market, but let’s compare Cisco, Juniper, and Palo Alto today.

NGFW products include unified threat management (UTM), non-disruptive bump-in-the-wire configuration, NAT, stateful packet inspection, virtual private network (VPN) support, an integrated signature-based IPS engine, and application awareness.

Cisco ASA with FirePOWER Services provides an integrated defense solution with greater firewall threat detection and protection services than other vendors. Cisco tops the field in IDS/IPS, though those are not the native role of a firewall; Cisco Firepower gives you the power of both, but it is a pretty new product. Firepower came to the market last year, and there is plenty left to assess. Cisco provides application visibility and control as part of the base configuration at no cost. Cisco licensing can be confusing, because separate licenses are required for next-generation intrusion prevention systems (NGIPS), advanced malware protection, and URL filtering.

Juniper support comes through channel partners, and Juniper is a Niche Player in Gartner’s quadrant. The Juniper SRX is the first NGFW to offer customers validated (Telcordia) 99.9999% availability (in its SRX 5000 line). Open attack signatures in the IPS also allow customers to add or customize signatures tailored to their network. Overall it is a pretty good option, but Gartner is skeptical about Juniper’s security vision; it may be better in terms of throughput/price. One can look to combine Cisco IPS with a decent Juniper model, since Cisco is the leader in IDS/IPS.

Palo Alto offers one license for a full UTM device. It is ranked #1 by Gartner and has very few cautions in the Gartner review. Overall it is a good device with all the NGFW features and pretty much nothing to question, except perhaps support, where its competitor Cisco may be slightly ahead.

Ceph & Security: OpenStack Best Practices for Storage

Abhishek Singh, OpenStack Best Practices

This article attempts to throw some light on community OpenStack best practices for Ceph storage. It aims to assist storage administrators and operations engineers who are engaged in deploying multi-node OpenStack clusters.

First, the author discusses Ceph and its benefits, and then Ceph best practices in different scenarios, looking at the infrastructure peripheries. The author assumes that readers have fundamental knowledge of OpenStack and its deployment. Ceph is a software-defined storage solution for OpenStack, used to aggregate different storage devices, including commodity storage, into an intelligent storage pool for various end users. A properly designed Ceph cluster can provide high availability too. OpenStack Cinder provides volumes and Glance provides the image service. Like other object stores, Ceph needs a gateway, an intelligent service that categorizes the defined data and places it into object storage; this is RadosGW. Ceph integrates with OpenStack Nova and Cinder through RADOS block devices. One benefit of Cinder with Ceph over the default volume back end (local volumes managed by LVM) is that it is a distributed, network-available solution.

Another advantageous feature that comes with Ceph is copy-on-write, which allows an existing volume to act as the source for the unmodified data of another volume. This consumes significantly less space for new virtual machines based on templates and snapshots. With network availability and distributed storage, live migration becomes possible even for ephemeral disks, which proves handy when dealing with failing hosts and during infrastructure upgrades. Ceph’s integration with QEMU also leaves room to use the Cinder QoS feature to keep virtual machines from consuming all IOPS and storage resources.

The purpose of this article is to emphasise that a cloud deployment is more exposed to threats than a traditional environment. This is because the storage is accessible over the internet and, in addition, Ceph and other OpenStack services are installed on servers mostly with default options.

This article will now discuss securing block and object storage with Ceph, and then move on to securing the connectivity between OpenStack and SAN/NAS solutions. RadosGW is the vulnerable component in object storage because it is exposed to HTTP RESTful requests. It was suggested at the OpenStack Summit in Vancouver to place a proxy appliance on a separate network, with SSL termination, proxy forwarding, and web authentication filtering between the virtual machines and RadosGW. Ceph has no centralized mechanism for managing object storage; it is managed using CephX with each device. This means that clients interact directly with the object storage devices (OSDs); CephX works like Kerberos. Here’s the catch: CephX authentication only covers the path between Ceph clients and Ceph server hosts. It is not extended beyond the Ceph client, and CephX policy does not apply when someone accesses the Ceph client from a remote host.

To exercise the functionality of monitors, OSDs, and metadata servers, Ceph has another authorization mechanism called “caps” (capabilities). Caps also restrict access among pools: users can have access to certain pools and no access to others. In other words, this mechanism helps in building policies for authorization.

It is very important to understand how Ceph authenticates and the vulnerabilities attached to it. Ceph uses keys for communication. The keys used to authenticate a Ceph client are stored on the server in plain-text files, which is a vulnerability in any environment: if someone hacks the server, the keys are exposed. To control this, arbitrary users, portable machines, and laptops should not be configured to talk to Ceph directly, because that would require storing plain-text key files on more vulnerable machines and would compromise security. As a best practice, users can log in to a trusted machine with proper hardening and security and keep the plain-text authentication files there. So far, Ceph does not include options to encrypt user data in object storage, so an out-of-the-box solution is needed to encrypt the data. Apart from this, one can implement DoS best practices, for example: limit the load from clients using the QEMU I/O throttling features, limit the maximum open sockets per object storage device (OSD), limit the maximum open sockets per source IP, and use throttling per client.
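
To illustrate how a client presents its CephX credentials, here is a hedged sketch using the python-rados bindings; the user name, keyring path, and pool name are assumptions, and the keyring file is exactly the plain-text secret discussed above, so it belongs only on a hardened host.

    # Authenticate as a restricted CephX user and write a single object.
    import rados

    cluster = rados.Rados(
        conffile="/etc/ceph/ceph.conf",
        rados_id="glance",  # maps to the CephX user client.glance
        conf={"keyring": "/etc/ceph/ceph.client.glance.keyring"},
    )
    cluster.connect()

    ioctx = cluster.open_ioctx("images")  # the user's caps must allow this pool
    ioctx.write_full("hello-object", b"stored via librados")

    ioctx.close()
    cluster.shutdown()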

Moving on to the second section, this article now discusses securing the connectivity between OpenStack and SAN/NAS solutions, which is equally important as securing block and object storage. Cinder and the storage communicate through a management path, using SSH or REST over SSL. It is advisable to keep the management interface on secure LANs and to use strong passwords for management accounts. Avoid default vendor passwords; role-based security and accountability can also be helpful forensic tools. Now the readers might be thinking about the effort needed to secure the data path. There are many ways to do it: a strict checklist with specifications for hardening parameters can be applied to devices and components. For NFS, say, stricter configuration options for exports and user management can be practiced, proper access control lists (ACLs) should limit the IP SAN to authenticated users only, and every other setting that shortens the vulnerability list should be applied. Proper ACLs matter because, in theory, any server that resides on the same IP segment as the iSCSI storage can access the storage and perform read/write operations. Keeping control files owned by root with permissions 600 is also advisable.

There are other ways to secure communications, such as CHAP, which identifies the client through a username and password. When Cinder communicates with the storage, it generates a random key that Nova then uses when it connects over iSCSI, and thus a secure connection is established. Another important area to consider is encrypting exposed traffic, and there are two ways to encrypt the data: transport mode and tunnel mode. On the transmitting side, transport mode encrypts only the data portion and not the header, whereas tunnel mode encrypts both header and data. On the receiving side, an IPsec-compliant device must be able to decrypt the packets, and for that to work the transmitter and receiver should share a key that gives secure connectivity; this can, however, put some load on the network. For volumes that use block storage over Fibre Channel, zone managers should be configured through cinder.conf for proper control.

To conclude, the OpenStack ecosystem is quite vulnerable when installed with typical defaults, and there is a lot of room for improvement in terms of security.