{"id":73077,"date":"2017-02-09T13:17:32","date_gmt":"2017-02-09T13:17:32","guid":{"rendered":"https:\/\/www.red-gate.com\/simple-talk\/uncategorized\/policy-managed-oracle-rac-one-node-databases\/"},"modified":"2021-07-14T13:06:56","modified_gmt":"2021-07-14T13:06:56","slug":"policy-managed-oracle-rac-one-node-databases","status":"publish","type":"post","link":"https:\/\/www.red-gate.com\/simple-talk\/databases\/oracle-databases\/policy-managed-oracle-rac-one-node-databases\/","title":{"rendered":"Policy-Managed Oracle RAC One Node Databases"},"content":{"rendered":"<p>Oracle RAC One Node, introduced in Oracle Database 11<em>g <\/em>Release 2 (11.2.0.2), is a single instance of an Oracle RAC-enabled database running on one node in a cluster. It lets you consolidate multiple databases onto a single cluster while providing the high-availability benefit of failover protection to each single-instance database.<\/p>\n<p>Oracle RAC One Node databases can be configured to be administrator-managed or policy-managed.<\/p>\n<p><strong>Administrator-managed Oracle RAC One Node Database: <\/strong>The database instance executes on one of the servers from the specified list of candidate nodes. Candidate servers reside in the Generic server pool, and as long as at least one server from the candidate list is available, database availability is ensured.<\/p>\n<p><strong>Policy-managed Oracle RAC One Node Database: <\/strong>The database instance executes on one of the members of the server pool(s) associated with the database. 
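<\/p>\n<p>As a rough sketch of the difference, the two management styles map to different <code>srvctl add database<\/code> options. The database name <em>r1adb<\/em> below is an illustrative placeholder, not part of this environment:<\/p>\n<pre># Administrator-managed RAC One Node: name the candidate servers explicitly (illustrative)\r\nsrvctl add database -db r1adb -oraclehome \/u01\/app\/oracle\/product\/12.1.0\/dbhome_1 -dbtype RACONENODE -server \"host01,host02\"\r\n\r\n# Policy-managed RAC One Node: bind the database to a server pool instead (illustrative)\r\nsrvctl add database -db r1pmdb -oraclehome \/u01\/app\/oracle\/product\/12.1.0\/dbhome_1 -dbtype RACONENODE -serverpool r1pool<\/pre>\n<p>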
To ensure availability of the database, server pools need to be configured such that a server will be available for the database to fail over to in case its current node becomes unavailable.<\/p>\n<p>In this article, I will discuss various options available to configure server pools for policy-managed RAC One Node databases.<\/p>\n<h2>Environment:<\/h2>\n<ul>\n<li>Oracle Clusterware version: 12.1.0.2<\/li>\n<li>Oracle database version: 12.1.0.2<\/li>\n<li>Type of cluster: Flex<\/li>\n<li>Hub nodes: <strong>host01<\/strong>, <strong>host02<\/strong>, <strong>host03<\/strong><\/li>\n<li>RAC One Node policy-managed database: <strong>r1pmdb<\/strong><\/li>\n<\/ul>\n<h2>Server Pool Configuration for RAC One Node Databases<\/h2>\n<p>Server pools for RAC One Node databases can be configured as one of the following:<\/p>\n<ul>\n<li>Server pool of size 1<\/li>\n<li>Server pool of size &gt; 1<\/li>\n<li>Server pool associated with a category (Oracle Clusterware 12c onwards)<\/li>\n<\/ul>\n<p>Let\u2019s discuss these possibilities one by one.<\/p>\n<h2>Server pool of size 1<\/h2>\n<p>A server pool of size 1 can be used for a RAC One Node database by setting both the minimum and maximum size of the server pool to 1, so that the pool contains exactly one server. 
To ensure availability of the database, the IMPORTANCE of the server pool should be set higher than all other server pools in the cluster, so that, if the only server in the pool fails, a new server from the free server pool or another server pool is relocated into the server pool, as required.<\/p>\n<h3><strong>Demonstration<\/strong><\/h3>\n<ul>\n<li>Create a server pool <strong>r1pool<\/strong> (to host RAC One Node database) of size 1:<\/li>\n<\/ul>\n<pre>[oracle@host02 root]$ srvctl add srvpool -serverpool <span style=\"color: red;\"><strong>r1pool -min 1 -max 1<\/strong><\/span>\r\n\r\n[oracle@host02 root]$ srvctl config srvpool -serverpool r1pool\r\n<span style=\"color: red;\"><strong>Server pool name: r1pool<\/strong><\/span>\r\nImportance: 0, <span style=\"color: red;\"><strong>Min: 1, Max: 1<\/strong><\/span>\r\nCategory: hub\r\nCandidate server names:<\/pre>\n<ul>\n<li>Create another server pool, <strong>pool1<\/strong>, of size 1:<\/li>\n<\/ul>\n<pre>[oracle@host02 root]$ srvctl add srvpool -serverpool <span style=\"color: red;\"><strong>pool1 -min 1 -max 1<\/strong><\/span>\r\n\r\n[oracle@host02 root]$ srvctl config srvpool -serverpool pool1\r\n<span style=\"color: red;\"><strong>Server pool name: pool1<\/strong><\/span>\r\nImportance: 0, <span style=\"color: red;\"><strong>Min: 1, Max: 1<\/strong><\/span>\r\nCategory: hub\r\nCandidate server names:<\/pre>\n<ul>\n<li>Currently, one server has been assigned to each of the <strong>Free<\/strong>, <strong>pool1<\/strong> and <strong>r1pool<\/strong> server pools.<\/li>\n<\/ul>\n<pre>[root@host02 ~]# crsctl stat serverpool\r\nNAME=Free\r\nACTIVE_SERVERS=host01\r\n\r\nNAME=Generic\r\nACTIVE_SERVERS=\r\n\r\nNAME=ora.pool1\r\nACTIVE_SERVERS=host03\r\n\r\nNAME=ora.r1pool\r\nACTIVE_SERVERS=host02<\/pre>\n<ul>\n<li>Using DBCA, create a policy managed RAC One Node database <strong>r1pmdb<\/strong> assigned to server pool <strong>r1pool:<\/strong><\/li>\n<\/ul>\n<pre>[oracle@host02 root]$ srvctl config database -d 
r1pmdb\r\n\r\nDatabase unique name: r1pmdb\r\n<span style=\"color: red;\"><strong>Database name: r1pmdb<\/strong><\/span>\r\nOracle home: \/u01\/app\/oracle\/product\/12.1.0\/dbhome_1\r\nOracle user: oracle\r\nSpfile: +DATA\/R1PMDB\/PARAMETERFILE\/spfile.282.929472875\r\nPassword file: +DATA\/R1PMDB\/PASSWORD\/pwdr1pmdb.286.929472177\r\nDomain:\r\nStart options: open\r\nStop options: immediate\r\nDatabase role: PRIMARY\r\nManagement policy: AUTOMATIC\r\n<span style=\"color: red;\"><strong>Server pools: r1pool<\/strong><\/span>\r\nDisk Groups: DATA\r\nMount point paths:\r\nServices: sr1pmdb\r\n<span style=\"color: red;\"><strong>Type: RACOneNode<\/strong><\/span>\r\nOnline relocation timeout: 30\r\nInstance name prefix: r1pmdb\r\nCandidate servers:\r\nOSDBA group: dba\r\nOSOPER group: oper\r\nDatabase instances:\r\n<span style=\"color: red;\"><strong>Database is policy managed<\/strong><\/span>\r\n<\/pre>\n<ul>\n<li>We can see that the database <strong>r1pmdb<\/strong> is currently executing on host02, the only member of server pool <strong>r1pool:<\/strong><\/li>\n<\/ul>\n<pre class=\"\">[root@host02 ~]# srvctl status database -d r1pmdb\r\n\r\n<span style=\"color: red;\"><strong>Instance r1pmdb_1 is running on node host02<\/strong><\/span>\r\nOnline relocation: INACTIVE\r\n<\/pre>\n<ul>\n<li>As we stop CRS on <strong>host02<\/strong> where <strong>r1pmdb<\/strong> is currently running, <strong>host02<\/strong> ceases to be a part of the cluster and <strong>host01<\/strong>, which is a free server, moves to <strong>r1pool<\/strong> so that database <strong>r1pmdb<\/strong> now runs on <strong>host01<\/strong> \u2013 the newly-assigned member of server pool <strong>r1pool<\/strong>.<\/li>\n<\/ul>\n<pre>[root@host02 ~]# crsctl stop crs -f\r\n\r\n[root@host01 ~]# crsctl stat serverpool\r\nNAME=Free\r\nACTIVE_SERVERS=\r\n\r\nNAME=Generic\r\nACTIVE_SERVERS=\r\n\r\nNAME=ora.pool1\r\nACTIVE_SERVERS=host03\r\n\r\n<span style=\"color: 
red;\"><strong>NAME=ora.r1pool<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>ACTIVE_SERVERS=host01<\/strong><\/span>\r\n\r\n[root@host01 ~]# srvctl status database -d r1pmdb\r\n\r\n<span style=\"color: red;\"><strong>Instance r1pmdb_1 is running on node host01<\/strong><\/span>\r\nOnline relocation: INACTIVE<\/pre>\n<ul>\n<li>Let\u2019s stop CRS on <strong>host01<\/strong> where database <strong>r1pmdb<\/strong> is currently running. It can be seen that <strong>host03<\/strong>, the only surviving server in the cluster, which is a member of server pool <strong>pool1<\/strong>, does not move to server pool <strong>r1pool<\/strong> because the IMPORTANCE of <strong>r1pool<\/strong> is not higher than that of <strong>pool1<\/strong>. As a result, <strong>r1pool<\/strong> becomes empty and database <strong>r1pmdb<\/strong> is no longer running on any of the nodes.<\/li>\n<\/ul>\n<pre>[root@host01 ~]# crsctl stop crs -f\r\n\r\n[root@host03 ~]# crsctl stat serverpool\r\nNAME=Free\r\nACTIVE_SERVERS=\r\n\r\nNAME=Generic\r\nACTIVE_SERVERS=\r\n\r\n<span style=\"color: red;\"><strong>NAME=ora.pool1<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>ACTIVE_SERVERS=host03<\/strong><\/span>\r\n\r\n<span style=\"color: red;\"><strong>NAME=ora.r1pool<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>ACTIVE_SERVERS=<\/strong><\/span>\r\n\r\n[root@host03 ~]# srvctl config srvpool -serverpool r1pool\r\n<span style=\"color: red;\"><strong>Server pool name: r1pool<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>Importance: 0<\/strong><\/span>, Min: 1, Max: 1\r\nCategory: hub\r\nCandidate server names:\r\n\r\n[oracle@host03 root]$ srvctl config srvpool -serverpool pool1\r\n\r\n<span style=\"color: red;\"><strong>Server pool name: pool1<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>Importance: 0<\/strong><\/span>, Min: 1, Max: 1\r\nCategory: hub\r\nCandidate server names:\r\n\r\n[root@host03 ~]# srvctl status database -d 
r1pmdb\r\n<span style=\"color: red;\"><strong>Database is not running<\/strong><\/span>.\r\nOnline relocation: INACTIVE<\/pre>\n<ul>\n<li>Let\u2019s increase the IMPORTANCE of <strong>r1pool<\/strong> to 10. As a result, <strong>host03<\/strong> moves from <strong>pool1<\/strong> to <strong>r1pool<\/strong> and database <strong>r1pmdb<\/strong> starts executing on <strong>host03:<\/strong><\/li>\n<\/ul>\n<pre>[oracle@host03 ~]$ srvctl modify srvpool -serverpool r1pool -importance 10\r\n\r\n[oracle@host03 ~]$ srvctl config srvpool -serverpool r1pool\r\n<span style=\"color: red;\"><strong>Server pool name: r1pool<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>Importance: 10<\/strong><\/span>, Min: 1, Max: 1\r\nCategory: hub\r\nCandidate server names:\r\n\r\n[root@host03 ~]# crsctl stat serverpool\r\nNAME=Free\r\nACTIVE_SERVERS=\r\n\r\nNAME=Generic\r\nACTIVE_SERVERS=\r\n\r\nNAME=ora.pool1\r\nACTIVE_SERVERS=\r\n\r\n<span style=\"color: red;\"><strong>NAME=ora.r1pool<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>ACTIVE_SERVERS=host03<\/strong><\/span>\r\n\r\n[root@host03 ~]# srvctl status database -d r1pmdb\r\n\r\n<span style=\"color: red;\"><strong>Instance r1pmdb_1 is running on node host03<\/strong><\/span>\r\nOnline relocation: INACTIVE<\/pre>\n<p>Thus, if a server pool of size 1 is used to host a RAC One Node database, the IMPORTANCE of the server pool should be set higher than that of all other server pools in the cluster to ensure availability of the database.<\/p>\n<h2>Server pool of size &gt; 1<\/h2>\n<p>We can also configure a server pool to have size &gt; 1 for a RAC One Node database. In this case, the database will execute on any one node in the server pool. If that node leaves the cluster, the database will automatically fail over to another node in the server pool. When the pool falls below its minimum size, servers from the Free or other server pools in the cluster may move into the deficient pool, depending upon the configuration. 
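<\/p>\n<p>Incidentally, the \u201cOnline relocation: INACTIVE\u201d line reported by <code>srvctl status database<\/code> refers to planned relocation: besides failing over when a node is lost, a RAC One Node instance can be migrated to another eligible node while the database stays up. A minimal sketch, assuming the target node is permitted by the database\u2019s server pool configuration (the node name and timeout value are illustrative):<\/p>\n<pre># Planned online relocation of the running instance (illustrative)\r\nsrvctl relocate database -db r1pmdb -node host03\r\n\r\n# Optionally bound, in minutes, how long existing transactions may delay the move\r\nsrvctl relocate database -db r1pmdb -node host03 -timeout 15<\/pre>\n<p>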
If it is desired that the RAC One Node database run only on certain nodes in the cluster, the associated server pool can be configured accordingly. As long as there is at least one server in the server pool, the RAC One Node database will be available.<\/p>\n<h3>Demonstration<\/h3>\n<ul>\n<li>Let\u2019s increase the size of server pool <strong>r1pool<\/strong> to 2 and specify <strong>host02<\/strong> and <strong>host03<\/strong> as its members so that the database <strong>r1pmdb<\/strong> will execute only on <strong>host02<\/strong> and <strong>host03:<\/strong><\/li>\n<\/ul>\n<pre>[oracle@host03 ~]$ srvctl modify srvpool -serverpool <span style=\"color: red;\"><strong>r1pool -min 1 -max 2<\/strong><\/span> -category \"\"\r\n\r\n[oracle@host03 ~]$ srvctl modify srvpool -serverpool r1pool <span style=\"color: red;\"><strong>-servers \"host02,host03\"<\/strong><\/span>\r\n\r\n[oracle@host03 ~]$ srvctl config srvpool -serverpool r1pool\r\nServer pool name: <span style=\"color: red;\"><strong>r1pool<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>Importance: 10<\/strong><\/span>, Min: 1, Max: 2\r\nCategory:\r\n<span style=\"color: red;\"><strong>Candidate server names: host02,host03<\/strong><\/span>\r\n<\/pre>\n<ul>\n<li>On starting CRS on <strong>host01<\/strong> and <strong>host02<\/strong>, <strong>host02<\/strong> becomes a member of <strong>r1pool<\/strong> whereas <strong>host01<\/strong> is assigned to <strong>pool1<\/strong>.<\/li>\n<\/ul>\n<pre>[root@host01 ~]# crsctl start crs\r\n[root@host02 ~]# crsctl start crs\r\n\r\n[root@host01 ~]# crsctl stat serverpool\r\nNAME=Free\r\nACTIVE_SERVERS=\r\n\r\nNAME=Generic\r\nACTIVE_SERVERS=\r\n\r\n<span style=\"color: red;\"><strong>NAME=ora.pool1<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>ACTIVE_SERVERS=host01<\/strong><\/span>\r\n\r\n<span style=\"color: red;\"><strong>NAME=ora.r1pool<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>ACTIVE_SERVERS=host02 host03<\/strong><\/span><\/pre>\n<ul>\n<li>If we 
now stop CRS on <strong>host03<\/strong> where the database is currently executing, only one server, i.e. <strong>host02<\/strong>, is left in <strong>r1pool<\/strong>. The database automatically relocates to <strong>host02<\/strong> \u2013 the only remaining server in <strong>r1pool<\/strong>.<\/li>\n<\/ul>\n<pre>[root@host01 ~]# srvctl status database -d r1pmdb\r\n<span style=\"color: red;\"><strong>Instance r1pmdb_1 is running on node host03<\/strong><\/span>\r\nOnline relocation: INACTIVE\r\n\r\n[root@host03 ~]# crsctl stop crs -f\r\n\r\n[root@host01 ~]# crsctl stat serverpool\r\nNAME=Free\r\nACTIVE_SERVERS=\r\n\r\nNAME=Generic\r\nACTIVE_SERVERS=\r\n\r\nNAME=ora.pool1\r\nACTIVE_SERVERS=host01\r\n\r\n<span style=\"color: red;\"><strong>NAME=ora.r1pool<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>ACTIVE_SERVERS=host02<\/strong><\/span>\r\n\r\n[root@host01 ~]# srvctl status database -d r1pmdb\r\n<span style=\"color: red;\"><strong>Instance r1pmdb_1 is running on node host02<\/strong><\/span>\r\nOnline relocation: INACTIVE<\/pre>\n<ul>\n<li>If we stop CRS on <strong>host02<\/strong> now, <strong>r1pool<\/strong> becomes empty as <strong>r1pool<\/strong> can have only two servers, i.e. <strong>host02<\/strong> and <strong>host03<\/strong>, and both of these are no longer part of the cluster. 
As a result, database <strong>r1pmdb<\/strong> is not running in the cluster.<\/li>\n<\/ul>\n<pre>[root@host02 ~]# crsctl stop crs -f\r\n\r\n[root@host01 ~]# crsctl stat serverpool\r\nNAME=Free\r\nACTIVE_SERVERS=\r\n\r\nNAME=Generic\r\nACTIVE_SERVERS=\r\n\r\nNAME=ora.pool1\r\nACTIVE_SERVERS=host01\r\n\r\n<span style=\"color: red;\"><strong>NAME=ora.r1pool<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>ACTIVE_SERVERS=<\/strong><\/span>\r\n\r\n[root@host01 ~]# srvctl status database -d r1pmdb\r\n\r\n<span style=\"color: red;\"><strong>Database is not running.<\/strong><\/span>\r\nOnline relocation: INACTIVE<\/pre>\n<p>Thus, to restrict the execution of a RAC One Node database to specific servers in the cluster, we can specify by name which servers should be part of the associated server pool. However, it would be more appropriate to assign servers based on their properties, such as memory or CPU count, rather than their names.<\/p>\n<h2>Server pool associated with a category (Oracle Clusterware 12c onwards)<\/h2>\n<p>Oracle Grid Infrastructure 12c enhances the use of server pools by introducing server attributes such as MEMORY_SIZE and CPU_COUNT, which are associated with each server. A new Clusterware object, the server category, enables you to organize servers into categories based on their attributes. We can configure a server pool to be associated with a server category, so that pool membership is determined by server attributes rather than merely by the number or names of the servers in the pool. In this case, the database will execute on any one node in the server pool. If that node leaves the cluster, the database will automatically fail over to another node in the server pool. When the pool falls below its minimum size, servers of the same category from the Free or other server pools in the cluster may move into the deficient pool, depending upon the configuration. 
As long as there is at least one server in the server pool, the RAC One Node database will be available.<\/p>\n<h3><strong>Demonstration<\/strong><\/h3>\n<ul>\n<li>Let\u2019s look at the attributes of the various servers in the cluster. It can be seen that <strong>host01<\/strong> and <strong>host03<\/strong> have memory = 3063 MB whereas <strong>host02<\/strong> has memory = 2502 MB.<\/li>\n<\/ul>\n<pre>[root@host01 ~]# crsctl stat server host01 host02 host03 -f\r\n<span style=\"color: red;\"><strong>NAME=host01<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>MEMORY_SIZE=3063<\/strong><\/span>\r\nCPU_COUNT=1\r\nCPU_CLOCK_RATE=3292\r\nCPU_HYPERTHREADING=0\r\nCPU_EQUIVALENCY=1000\r\nDEPLOYMENT=other\r\nCONFIGURED_CSS_ROLE=hub\r\nRESOURCE_USE_ENABLED=1\r\nSERVER_LABEL=\r\nPHYSICAL_HOSTNAME=\r\nSTATE=ONLINE\r\nACTIVE_POOLS=ora.pool1\r\nSTATE_DETAILS=\r\nACTIVE_CSS_ROLE=hub\r\n\r\n<span style=\"color: red;\"><strong>NAME=host02<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>MEMORY_SIZE=2502<\/strong><\/span>\r\nCPU_COUNT=1\r\nCPU_CLOCK_RATE=3292\r\nCPU_HYPERTHREADING=0\r\nCPU_EQUIVALENCY=1000\r\nDEPLOYMENT=other\r\nCONFIGURED_CSS_ROLE=hub\r\nRESOURCE_USE_ENABLED=1\r\nSERVER_LABEL=\r\nPHYSICAL_HOSTNAME=\r\nSTATE=ONLINE\r\nACTIVE_POOLS=ora.r1pool\r\nSTATE_DETAILS=AUTOSTARTING RESOURCES\r\nACTIVE_CSS_ROLE=hub\r\n\r\n<span style=\"color: red;\"><strong>NAME=host03<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>MEMORY_SIZE=3063<\/strong><\/span>\r\nCPU_COUNT=1\r\nCPU_CLOCK_RATE=3292\r\nCPU_HYPERTHREADING=0\r\nCPU_EQUIVALENCY=1000\r\nDEPLOYMENT=other\r\nCONFIGURED_CSS_ROLE=hub\r\nRESOURCE_USE_ENABLED=1\r\nSERVER_LABEL=\r\nPHYSICAL_HOSTNAME=\r\nSTATE=ONLINE\r\nACTIVE_POOLS=ora.r1pool\r\nSTATE_DETAILS=AUTOSTARTING RESOURCES\r\nACTIVE_CSS_ROLE=hub<\/pre>\n<ul>\n<li>We will create two categories of server:\n<ul>\n<li><strong>large<\/strong> (<code>MEMORY_SIZE<\/code> &gt;= 3000 MB)<\/li>\n<li><strong>small<\/strong> (<code>MEMORY_SIZE<\/code> &lt; 
3000 MB)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>As the following output shows, <strong>host01<\/strong> and <strong>host03<\/strong> belong to the <strong>large<\/strong> category whereas <strong>host02<\/strong> belongs to the <strong>small<\/strong> category.<\/p>\n<pre>[root@host01 ~]# crsctl add category <span style=\"color: red;\"><strong>large<\/strong><\/span> -attr \"EXPRESSION='(<span style=\"color: red;\"><strong>MEMORY_SIZE &gt; 3000) OR (MEMORY_SIZE = 3000<\/strong><\/span>)'\"\r\n\r\n[root@host01 ~]# crsctl add category <span style=\"color: red;\"><strong>small<\/strong><\/span> -attr \"EXPRESSION='(<span style=\"color: red;\"><strong>MEMORY_SIZE &lt; 3000<\/strong><\/span>)'\"\r\n\r\n[root@host01 ~]# crsctl stat server -category <span style=\"color: red;\"><strong>large<\/strong><\/span>\r\nNAME=<span style=\"color: red;\"><strong>host01<\/strong><\/span>\r\nSTATE=ONLINE\r\n\r\nNAME=<span style=\"color: red;\"><strong>host03<\/strong><\/span>\r\nSTATE=ONLINE\r\n\r\n[root@host01 ~]# crsctl stat server -category <span style=\"color: red;\"><strong>small<\/strong><\/span>\r\nNAME=<span style=\"color: red;\"><strong>host02<\/strong><\/span>\r\nSTATE=ONLINE<\/pre>\n<ul>\n<li>Let\u2019s resize server pool <strong>r1pool<\/strong> to size 1 and associate the <strong>large<\/strong> category with both <strong>r1pool<\/strong> and <strong>pool1:<\/strong><\/li>\n<\/ul>\n<pre>[oracle@host03 root]$ srvctl modify srvpool -serverpool r1pool -servers \"\"\r\n\r\n[oracle@host03 root]$ srvctl modify srvpool -serverpool <span style=\"color: red;\"><strong>r1pool<\/strong><\/span> -category <span style=\"color: red;\"><strong>large -min 1 -max 1<\/strong><\/span>\r\n\r\n[oracle@host03 root]$ srvctl modify srvpool -serverpool <span style=\"color: red;\"><strong>pool1<\/strong><\/span> -category <span style=\"color: red;\"><strong>large<\/strong><\/span><\/pre>\n<ul>\n<li>As a result, both servers belonging to the <strong>large<\/strong> category, <strong>host01<\/strong> and 
<strong>host03<\/strong>, get assigned to <strong>pool1<\/strong> and <strong>r1pool<\/strong> respectively, whereas the <strong>small<\/strong> server <strong>host02<\/strong> goes to the <strong>Free<\/strong> pool. Subsequently, database <strong>r1pmdb<\/strong> starts running on <strong>host03<\/strong>, which is a member of <strong>r1pool<\/strong>.<\/li>\n<\/ul>\n<pre>[root@host01 ~]# crsctl stat serverpool\r\nNAME=<span style=\"color: red;\"><strong>Free<\/strong><\/span>\r\nACTIVE_SERVERS=<span style=\"color: red;\"><strong>host02<\/strong><\/span>\r\n\r\nNAME=Generic\r\nACTIVE_SERVERS=\r\n\r\nNAME=<span style=\"color: red;\"><strong>ora.pool1<\/strong><\/span>\r\nACTIVE_SERVERS=<span style=\"color: red;\"><strong>host01<\/strong><\/span>\r\n\r\nNAME=<span style=\"color: red;\"><strong>ora.r1pool<\/strong><\/span>\r\nACTIVE_SERVERS=<span style=\"color: red;\"><strong>host03<\/strong><\/span>\r\n\r\n[root@host01 ~]# srvctl status database -d r1pmdb\r\n<span style=\"color: red;\"><strong>Instance r1pmdb_2 is running on node host03<\/strong><\/span>\r\nOnline relocation: INACTIVE<\/pre>\n<p>If we stop CRS on <strong>host03<\/strong> where the database is currently running, the server <strong>host01<\/strong> from <strong>pool1<\/strong> moves to <strong>r1pool<\/strong> because they both belong to the <strong>large<\/strong> category and <strong>r1pool<\/strong> has higher IMPORTANCE than <strong>pool1<\/strong>. 
This causes database <strong>r1pmdb<\/strong> to relocate to <strong>host01<\/strong>.<\/p>\n<pre>[root@host03 ~]# crsctl stop crs -f\r\n\r\n[root@host01 ~]# crsctl stat serverpool\r\nNAME=Free\r\nACTIVE_SERVERS=host02\r\n\r\nNAME=Generic\r\nACTIVE_SERVERS=\r\n\r\nNAME=ora.pool1\r\nACTIVE_SERVERS=\r\n\r\n<span style=\"color: red;\"><strong>NAME=ora.r1pool<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>ACTIVE_SERVERS=host01<\/strong><\/span>\r\n\r\n[root@host01 ~]# srvctl status database -d r1pmdb\r\n<span style=\"color: red;\"><strong>Instance r1pmdb_2 is running on node host01<\/strong><\/span>\r\nOnline relocation: INACTIVE<\/pre>\n<ul>\n<li>Now, if we stop CRS on <strong>host01<\/strong> where the database is currently running, although server <strong>host02<\/strong> is free, it does not move to <strong>r1pool<\/strong> because it belongs to the <strong>small<\/strong> category. As a result, server pool <strong>r1pool<\/strong> becomes empty and the database <strong>r1pmdb<\/strong> stops running.<\/li>\n<\/ul>\n<pre>[root@host01 ~]# crsctl stop crs -f\r\n\r\n[root@host02 ~]# crsctl stat serverpool\r\n<span style=\"color: red;\"><strong>NAME=Free<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>ACTIVE_SERVERS=host02<\/strong><\/span>\r\n\r\nNAME=Generic\r\nACTIVE_SERVERS=\r\n\r\nNAME=ora.pool1\r\nACTIVE_SERVERS=\r\n\r\n<span style=\"color: red;\"><strong>NAME=ora.r1pool<\/strong><\/span>\r\n<span style=\"color: red;\"><strong>ACTIVE_SERVERS=<\/strong><\/span>\r\n\r\n[root@host02 ~]# srvctl status database -d r1pmdb\r\n<span style=\"color: red;\"><strong>Database is not running.<\/strong><\/span>\r\nOnline relocation: INACTIVE<\/pre>\n<p>This demonstrates how a category can be associated with the server pool hosting a RAC One Node database. If the server pool associated with a category loses a server, a server from the Free pool or another pool will move to it only if that server is of the same category, i.e. 
the RAC One Node database will run only on servers with the correct attributes.<\/p>\n<h2>Summary<\/h2>\n<ul>\n<li>Oracle RAC One Node databases can be configured to be administrator-managed or policy-managed.<\/li>\n<li>An administrator-managed Oracle RAC One Node database instance executes on one of the servers from the specified list of candidate nodes.<\/li>\n<li>A policy-managed Oracle RAC One Node database instance executes on one of the members of the server pool(s) associated with the database.<\/li>\n<li>Server pools for RAC One Node databases can be configured in different ways:\n<ul>\n<li><strong>Server pool of size 1<\/strong>: The database will execute on the only node in the server pool. The IMPORTANCE of the server pool should be set higher than that of all other server pools in the cluster, so that, if the only server in the pool fails, a new server from the Free server pool or another server pool is relocated into the server pool.<\/li>\n<li><strong>Server pool of size &gt; 1<\/strong>: The database will execute on any one node in the server pool. To restrict the execution of a RAC One Node database to specific servers in the cluster, we can specify by name which servers should be part of the associated server pool.<\/li>\n<li><strong>Server pool associated with a server category<\/strong> (Oracle Clusterware 12c onwards): The database will execute only on servers with certain attributes.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2>References:<\/h2>\n<p><a href=\"http:\/\/www.oracle.com\/technetwork\/database\/options\/raconenode\/documentation\/index.html\">http:\/\/www.oracle.com\/technetwork\/database\/options\/raconenode\/documentation\/index.html<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Oracle RAC One Node, introduced in Oracle Database 11g Release 2 (11.2.0.2), is a single instance of an Oracle RAC-enabled database running on one node in a cluster. 
It lets you consolidate multiple databases onto a single cluster while providing high availability benefits of failover protection to the single instance databases. Oracle RAC One Node databases can be configured to&hellip;<br \/>\n&hellip;<\/p>\n","protected":false},"author":316217,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[143533],"tags":[48454,48462,124952],"coauthors":[48555],"class_list":["post-73077","post","type-post","status-publish","format-standard","hentry","category-oracle-databases","tag-one-node-databases","tag-oracle-rac","tag-redgate-deploy"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/73077","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/users\/316217"}],"replies":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/comments?post=73077"}],"version-history":[{"count":2,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/73077\/revisions"}],"predecessor-version":[{"id":73568,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/73077\/revisions\/73568"}],"wp:attachment":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/media?parent=73077"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/categories?post=73077"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/tags?post=73077"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/coauthors?post
=73077"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}