Hi all,

We have cut the latest Project Clearwater release: Nazgûl. The code for this release has been tagged as release-128 on GitHub.
This release includes a couple of substantial pieces of work.

First, we've created a clearer distinction between data that Homestead is caching and data that is mastered by Homestead-Prov. Homestead now uses memcached to store its cached data, whereas Homestead-Prov still masters its data in Cassandra (if your deployment is not using an external HSS); provisioned subscriber data remains persistently stored on disk. This move should improve general performance, as well as bringing our infrastructure on the main call path closer together, with both Sprout and Homestead now using memcached and Astaire. Because of these changes, there are a few specific upgrade steps that must be followed to take this release; details are included at the bottom of this release note.

Second, we have re-architected how the Sprout stores are laid out and set up (in particular the AoRStore and ImpiStore). This work has been undertaken to make Clearwater more flexible, and easier to maintain and develop. In general, we are seeing an increasing trend in the number of platform storage solutions available, ranging from those provided by generic cloud providers (e.g. EC2) to custom-built data layers shared by a whole set of products. As ever, Project Clearwater is striving to lead the way in cloud-native architectures, and aims to leverage these technologies wherever possible. While you shouldn't see any difference as an end user, anyone interested in integrating Project Clearwater with their own choice of storage solution should find it easier to get started.
This release also includes the following bug fixes:

* `sudo service clearwater-etcd decommission` doesn't fully decommission Cassandra
* Race conditions in memcached UTs causing failures
* Calls can fail in a race condition between sites
* Intermittent memcached UT failure due to a race condition
* "Mark node failed" can hang indefinitely if there's no etcd cluster
* Challenged unregisters failing due to different URL escaping methods
* Typo in a Ralf alarm
* Our example ENUM rules may not work correctly
* OpenStack Heat templates didn't create the homestead-cache schema on Vellum
* Homestead-Prov doesn't respond to requests in an IPv6 deployment
* Ellis Debian links were not configured correctly
* Sprout sends reg-event NOTIFYs when somebody else SUBSCRIBEs
* Sprout sends a reg-event NOTIFY to the UE whenever the P-CSCF refreshes its subscription
* Homestead cannot properly handle Charging-Information Push-Profile-Requests without an IMS subscription element

Upgrading to the new release

For this release, our changes have a couple of added implications:

* There's a new shared_config option, `homestead_impu_store`, which is required for Homestead to run
* Vellum nodes won't run Cassandra if your deployment isn't using Homestead-Prov

To upgrade to this release, perform the following steps.

For deployments currently using Homestead-Prov:

· Before taking the upgrade, add the following option to your shared_config, and upload it using `cw-upload_shared_config`: `homestead_impu_store=<the same value as your existing sprout_registration_store option>`
· Then follow the instructions at http://docs.projectclearwater.org/en/stable/Upgrading_a_Clearwater_deployment.html.

For deployments integrated with an external HSS:

· If your deployment uses an external HSS rather than Homestead-Prov, we recommend you re-deploy to pick up these changes cleanly and avoid conflicts.
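The pre-upgrade config change can be sketched as a short shell snippet. This is an illustrative sketch only: the store value is a hypothetical placeholder, and the snippet works on a temporary demo file rather than the real shared_config (which lives at /etc/clearwater/shared_config on a deployed node).

```shell
# Hypothetical sketch of the pre-upgrade config step.  On a real node CONFIG
# would be /etc/clearwater/shared_config; here we build a demo file so the
# steps can be shown end to end.
CONFIG=$(mktemp)
echo 'sprout_registration_store=vellum.example.com' > "$CONFIG"

# Reuse the existing sprout_registration_store value for the new
# homestead_impu_store option, as the release note instructs.
STORE=$(grep '^sprout_registration_store=' "$CONFIG" | cut -d= -f2-)
echo "homestead_impu_store=$STORE" >> "$CONFIG"
cat "$CONFIG"

# On a real deployment you would then push the updated config out with:
#   cw-upload_shared_config
```

The key point is that `homestead_impu_store` should carry exactly the same value as your existing `sprout_registration_store` option before the upgrade is taken.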
New deployments using Chef:

· Chef has been updated to add the new shared_config option, so the next time you create a Chef deployment, ensure you:
* Pull the latest Chef changes from master
* Re-upload your cookbook and roles:
  `knife cookbook upload clearwater -o cookbooks/`
  `find roles/*.rb -exec knife role from file {} \;`

All-in-one images:

If you are deploying an all-in-one node, the standard image (http://vm-images.cw-ngv.com/cw-aio.ova) has been updated for this release.

Any issues, get in touch with us and the wider community on the Project Clearwater mailing list: http://www.projectclearwater.org/community

Cheers,
Adam
_______________________________________________
Clearwater mailing list
Clearwater@lists.projectclearwater.org
http://lists.projectclearwater.org/mailman/listinfo/clearwater_lists.projectclearwater.org