PL/SQL blocks come in two types: named and unnamed (anonymous). Named procedures are suited to stand-alone, reusable tasks; anonymous blocks are suited to ad-hoc work and shell scripts.
Anonymous Block (Unnamed)
A block created without a name is called an anonymous block. Because it has no name, it is not stored in the database and cannot be called again later.
Read: How to execute a stored procedure in SQL developer
[DECLARE
   ... optional declaration statements ...]
BEGIN
   ... executable statements ...
[EXCEPTION
   ... optional exception handler statements ...]
END;
Sample anonymous block
Here’s an example of Anonymous PL/SQL code.
SET SERVEROUTPUT ON;
DECLARE
   V_MYNUMBER NUMBER(2) := 1;
BEGIN
   DBMS_OUTPUT.PUT_LINE('MY INPUT IS : ' || V_MYNUMBER);
END;
/
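The sample above does not use the optional EXCEPTION section from the template. Here is a hedged sketch of an anonymous block that exercises all three sections; the variable names and the forced divide-by-zero are illustrative only:

SET SERVEROUTPUT ON;
DECLARE
   v_divisor NUMBER := 0;   -- illustrative value that will trigger an error
   v_result  NUMBER;
BEGIN
   v_result := 10 / v_divisor;   -- raises the predefined ZERO_DIVIDE exception
   DBMS_OUTPUT.PUT_LINE('Result: ' || v_result);
EXCEPTION
   WHEN ZERO_DIVIDE THEN
      DBMS_OUTPUT.PUT_LINE('Cannot divide by zero');
END;
/

When run, control jumps from the failing division straight to the handler, so the block prints the error message instead of terminating with an unhandled exception.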
Named PL/SQL block
Here is an example of a named PL/SQL block. The name of the procedure is pl.
Read: How to write Lookup query in PL/SQL
create or replace PROCEDURE pl(aiv_text in varchar2 )
is
begin
DBMS_OUTPUT.put_line(aiv_text);
end;
/
execute pl('my input srini');
drop procedure pl;
Here are the key takeaways:
- A named PL/SQL block begins with CREATE OR REPLACE PROCEDURE and uses IS in place of DECLARE.
- Variables are declared after the procedure name and parameter list, before BEGIN.
- Use the EXECUTE command to call the procedure, and the DROP PROCEDURE command to remove it.
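A named procedure can also return a value to the caller through an OUT parameter. As a sketch (the procedure and parameter names here are illustrative, not from the original example):

create or replace PROCEDURE double_it (
   ai_num IN  NUMBER,   -- input value
   ao_num OUT NUMBER    -- doubled result passed back to the caller
)
is
begin
   ao_num := ai_num * 2;
end;
/

-- Call it from an anonymous block:
DECLARE
   v_out NUMBER;
BEGIN
   double_it(21, v_out);
   DBMS_OUTPUT.PUT_LINE('Result: ' || v_out);
END;
/

Note that EXECUTE works only for procedures with IN parameters; a procedure with an OUT parameter needs a calling block (or a bind variable) to receive the result.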
Read: How to write UDF to check input value is number or not