Finding all databases, tables, and columns on a server instance

DECLARE @sql AS varchar(max)

SET @sql = 'SELECT @@SERVERNAME AS DB_SERVER, NULL AS DB_NAME, NULL AS Table_name, NULL AS column_name, NULL AS data_type, NULL AS max_length, NULL AS is_nullable '

SELECT @sql = @sql + ' UNION ALL SELECT @@SERVERNAME AS DB_SERVER, ''' + name + ''' AS DB_Name
, tab.name COLLATE DATABASE_DEFAULT AS table_name
, col.name COLLATE DATABASE_DEFAULT AS column_name
, typ.name COLLATE DATABASE_DEFAULT AS data_type
, col.max_length
, col.is_nullable
FROM ' + name + '.sys.tables tab
INNER JOIN ' + name + '.sys.columns col ON col.object_id = tab.object_id
INNER JOIN ' + name + '.sys.types typ ON col.user_type_id = typ.user_type_id ' + char(10)
FROM sys.databases
WHERE name NOT IN ('ReportServer','ReportServerTempDB','tempdb','master','model','msdb')
AND has_dbaccess(name) = 1
EXEC(@sql)
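
One caveat: if any database name contains spaces or other special characters, the concatenated three-part names will not parse. A hedged variant of just the FROM/JOIN lines, wrapping name with QUOTENAME so such names are bracket-quoted:

-- Variant of the FROM clause above: QUOTENAME brackets the database name
FROM ' + QUOTENAME(name) + '.sys.tables tab
INNER JOIN ' + QUOTENAME(name) + '.sys.columns col ON col.object_id = tab.object_id
INNER JOIN ' + QUOTENAME(name) + '.sys.types typ ON col.user_type_id = typ.user_type_id ' + char(10)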

CDC-enabled table list

DECLARE @sql AS varchar(max)

SET @sql = 'SELECT @@SERVERNAME AS DB_SERVER, NULL AS DB_NAME, NULL AS Table_name, NULL AS CDC_table_name, NULL AS Create_Date'

SELECT @sql = @sql + ' UNION ALL SELECT @@SERVERNAME AS DB_SERVER, ''' + name + ''' AS DB_Name
, tab.name COLLATE DATABASE_DEFAULT AS table_name
, ct.capture_instance + ''_CT'' AS CDC_table_name
, ct.create_date
FROM ' + name + '.sys.tables tab WITH (NOLOCK)
JOIN ' + name + '.cdc.change_tables ct WITH (NOLOCK) ON tab.object_id = ct.source_object_id' + char(10)
FROM sys.databases
WHERE is_cdc_enabled = 1

EXEC(@sql)
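
As a cross-check inside any single CDC-enabled database, SQL Server's built-in procedure reports the same capture-instance detail (the database name below is hypothetical):

USE YourCdcDatabase;  -- hypothetical name: run this in the CDC-enabled database
EXEC sys.sp_cdc_help_change_data_capture;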

DG readiness check

Adoption Services Prep Checklist

The intent of this checklist is to gather the required assets, knowledge, and resources necessary for a successful engagement and outcome. Customers may have all, some, or none of the assets and roles listed. Filling out the checklist and sending whatever assets the customer does have will help the Enterprise Adoption Services team understand the customer's maturity and baseline understanding for the engagement.

 

Assets

Each numbered item below pairs the requested asset with the questions it should answer; the customer's response goes alongside each item.

1. Current state overview
- What is the goal of the program?
- What are the primary areas you are challenged with?
- What are the current challenges/problems?
- What is the current progress/achievements?

2. Data strategy and/or Program Charter document
- Overall charter and/or strategy for how Collibra is being used

3. Program Roadmap
- The list of use cases with business problems, expected value, and success criteria defined
- Roadmap of prioritized use cases with the key milestones and capabilities required to make those use cases successful

4. Data Office or some form of Data Strategy Team
- Data Office structure: org chart, roles/responsibilities, and collaboration/communication processes within the team
- Overview of what this group of resources is tasked with

5. Playbooks or Services for deployed use cases
- For existing use cases/processes/services, how was the organization successful in deploying them and ensuring adoption/value?
- Any artifacts related to those objectives (e.g. a playbook for developing new use cases, strategies for engaging the business, etc.)

6. Education, Communication and Promotion Plans
- Is there a communication plan defined?
- Is there a promotion plan defined?
- Is there a (role-based) training path defined?

7. Solution Architecture
- Do you have both a current-state and a future-state architecture diagram (e.g. Visio) to share showing how the system has been or will be implemented?

8. Adoption and/or Value Measurements
- How are you measuring adoption and platform use today?
- Where are you tracking those metrics, and how are you using them as part of measuring the value of the use case?
- Do you have a framework for measuring use case value?

9. Business Operating Model
- For what has been deployed in production, how are you approaching ongoing maintenance, curation, and adding new information to your Collibra instance?

10. Security, Privacy and Access
- What processes and controls do you have today to ensure the right people have access to the right assets?
- What change management processes do you have to keep the Operating Model consistent?

Calculate table storage for SQL Server

DECLARE @sql AS varchar(max)
SET @sql = '
SELECT
@@SERVERNAME AS DBServer,
NULL AS DBName,
NULL AS TableName,
NULL AS SchemaName,
0 AS rows,
0.00 AS TotalSpaceKB,
0.00 AS TotalSpaceMB,
0.00 AS UsedSpaceKB,
0.00 AS UsedSpaceMB,
0.00 AS UnusedSpaceKB,
0.00 AS UnusedSpaceMB '

SELECT @sql = @sql + ' UNION ALL
SELECT
@@SERVERNAME AS DBServer
, ''' + name + ''' COLLATE DATABASE_DEFAULT AS DBName,
t.name COLLATE DATABASE_DEFAULT AS TableName,
s.name COLLATE DATABASE_DEFAULT AS SchemaName,
p.rows,
SUM(a.total_pages) * 8 AS TotalSpaceKB,
CAST(ROUND(((SUM(a.total_pages) * 8) / 1024.00), 2) AS NUMERIC(36, 2)) AS TotalSpaceMB,
SUM(a.used_pages) * 8 AS UsedSpaceKB,
CAST(ROUND(((SUM(a.used_pages) * 8) / 1024.00), 2) AS NUMERIC(36, 2)) AS UsedSpaceMB,
(SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB,
CAST(ROUND(((SUM(a.total_pages) - SUM(a.used_pages)) * 8) / 1024.00, 2) AS NUMERIC(36, 2)) AS UnusedSpaceMB
FROM
' + name + '.sys.tables t
INNER JOIN
' + name + '.sys.indexes i ON t.object_id = i.object_id
INNER JOIN
' + name + '.sys.partitions p ON i.object_id = p.object_id AND i.index_id = p.index_id
INNER JOIN
' + name + '.sys.allocation_units a ON p.partition_id = a.container_id
LEFT OUTER JOIN
' + name + '.sys.schemas s ON t.schema_id = s.schema_id
WHERE
t.name NOT LIKE ''dt%''
AND t.is_ms_shipped = 0
AND i.object_id > 255
GROUP BY
t.name, s.name, p.rows
' + char(10)
FROM sys.databases
WHERE name NOT IN ('ReportServer','ReportServerTempDB','tempdb','master','model','msdb')
AND has_dbaccess(name) = 1

EXEC(@sql)
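
Because the output of EXEC(@sql) cannot be ordered directly, a small variation (a sketch assuming the same @sql as above, with a hypothetical #space temp table) captures the rows with INSERT ... EXEC so they can be sorted, e.g. largest tables first:

-- Capture the dynamic SQL output into a temp table, then sort it
CREATE TABLE #space (
    DBServer sysname, DBName sysname NULL, TableName sysname NULL, SchemaName sysname NULL,
    [rows] bigint NULL, TotalSpaceKB numeric(36,2) NULL, TotalSpaceMB numeric(36,2) NULL,
    UsedSpaceKB numeric(36,2) NULL, UsedSpaceMB numeric(36,2) NULL,
    UnusedSpaceKB numeric(36,2) NULL, UnusedSpaceMB numeric(36,2) NULL)
INSERT INTO #space EXEC(@sql)
SELECT * FROM #space ORDER BY TotalSpaceMB DESC
DROP TABLE #space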

Shiny: how to write a Hello World R example

library(shiny)
library(DT)
library(RJDBC)

server <- function(input, output, session) {
  # Optional access logging when SHINY_ACCESS_LOGDIR is set
  if (Sys.getenv("SHINY_ACCESS_LOGDIR") != "") {
    access_file <- file.path(Sys.getenv("SHINY_ACCESS_LOGDIR"),
                             paste0(basename(getwd()), "-", basename(dirname(getwd())), ".log"))
    cat(paste("[", format(Sys.time(), tz = "America/Chicago"), "] -", session$user, "\n"),
        file = access_file, append = TRUE)
  }

  vals <- reactiveValues(user = "", pwd = "", data = NULL)

  # Modal for entering the SQL query
  observeEvent(input$query, {
    showModal(
      modalDialog(
        textAreaInput("sSQL", "Type your query here:", "select * from SqlAgentJob_RunToday ",
                      width = "500px", height = "50px"),
        footer = tagList(
          modalButton("Cancel"),
          actionButton("QueryOK", "OK")
        )
      )
    )
  })

  # Modal for choosing a VDB and entering credentials
  observeEvent(input$userpwd, {
    showModal(
      modalDialog(
        selectInput("vdb", label = h3("Select VDB:"),
                    choices = c("demo" = "demo", "DV_Reinsurance" = "DV_Reinsurance", "DA" = "DA",
                                "Claims" = "Claims", "DW" = "DW", "TDV_EMOM" = "TDV_EMOM",
                                "TDV_Reports" = "TDV_Reports", "TDV_PREM" = "TDV_PREM")),
        textInput("user", "Enter username"),
        passwordInput("pwd", "Enter password"),
        radioButtons("loginType", "Login Type:",
                     c("AD Authentication" = "AD",
                       "Application (Composite)" = "App")),
        footer = tagList(
          modalButton("Cancel"),
          actionButton("ok", "OK")
        )
      )
    )
  })

  observeEvent(input$ok, {
    vals$vdb <- input$vdb
    vals$user <- input$user
    vals$pwd <- input$pwd
    vals$loginType <- input$loginType
    removeModal()
  })

  observeEvent(input$QueryOK, {
    vals$sSQL <- input$sSQL
    removeModal()
  })

  # Connect over JDBC and run the query
  observeEvent(input$get_data, {
    jdbcUrl <- "jdbc:compositesw:dbapi@tdvhost01:9401?domain=composite&dataSource="
    if (vals$loginType == "AD") {
      jdbcUrl <- "jdbc:compositesw:dbapi@tdvhost01:9401?domain=xyz.com&dataSource="
    }

    con <- try(
      dbConnect(JDBC(driverClass = "cs.jdbc.driver.CompositeDriver",
                     classPath = "csjdbc.jar"),
                paste0(jdbcUrl, vals$vdb),
                user = vals$user, password = vals$pwd)
    )

    # Default query, overridden by whatever was typed in the query modal
    sSQL <- "select * from TDV.TDV_SOURCE.TDV_DB_LIST"
    if (!is.null(vals$sSQL)) sSQL <- vals$sSQL

    if (!inherits(con, "try-error")) {
      res <- try(dbGetQuery(con, sSQL))
      if (!inherits(res, "try-error")) {
        vals$data <- res
      }
    }
  })

  output$result_dt <- DT::renderDataTable({
    req(vals$data)
    datatable(vals$data,
              rownames = FALSE,
              options = list(scrollX = TRUE, pageLength = 20))
  })

  output$dl_result <- renderUI({
    req(vals$data)
    downloadButton("download_data", "Download csv file")
  })

  output$download_data <- downloadHandler(
    filename = function() { "data.csv" },
    content = function(file) {
      write.csv(vals$data, file, row.names = FALSE)
    }
  )
}
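
The snippet above only defines the server function. A minimal UI sketch (assumed, not part of the original snippet) that wires up the inputs and outputs the server references, plus the shinyApp() call that makes the example runnable:

ui <- fluidPage(
  titlePanel("TDV query hello world"),
  actionButton("userpwd", "Login"),       # opens the credentials/VDB modal
  actionButton("query", "Edit query"),    # opens the SQL editor modal
  actionButton("get_data", "Run query"),  # connects and executes the query
  uiOutput("dl_result"),                  # download button, shown once data exists
  DT::dataTableOutput("result_dt")        # query results grid
)

shinyApp(ui = ui, server = server)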

Shiny Server Installation Process 2

How to install and configure Shiny Server on a CentOS VM

# From a CentOS VM (7.x)

sudo yum install R
sudo su - -c "R -e \"install.packages('shiny', repos='https://cran.rstudio.com/')\""

wget https://download3.rstudio.org/centos6.3/x86_64/shiny-server-1.5.9.923-x86_64.rpm
sudo yum install --nogpgcheck shiny-server-1.5.9.923-x86_64.rpm

vi /etc/shiny-server/shiny-server.conf

# change the line
#     listen 3838;
# to
#     listen 3838 0.0.0.0;
# to allow remote access

# open the port for access
firewall-cmd --zone=public --add-port=3838/tcp --permanent
firewall-cmd --reload

# start the server
systemctl start shiny-server

# check the status of the server: open a browser and go to
#     http://10.206.97.55:3838/
# substituting your VM's IP
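
Apps are then served from /srv/shiny-server by default (path assumed from the stock config); a sketch of deploying the Hello World app from the previous section:

sudo mkdir -p /srv/shiny-server/helloworld
sudo cp app.R /srv/shiny-server/helloworld/   # app.R holding the ui/server code above
# then browse to http://<vm-ip>:3838/helloworld/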


Python SF upsert example 2

from salesforce_bulkipy import SalesforceBulkipy, CsvDictsAdapter

bulk = SalesforceBulkipy(username="dummy@xyz.com", password="P@ssme99#",
                         security_token="abY55tLsHyUK6T8k4IOFcoCl")

# A list of dicts, one per record, keyed by the Opportunity field names (fill in real values)
records_to_upsert = [{"Quote_Number__c": "", "Name": "", "StageName": "", "CloseDate": ""}]

# Bulk upsert, matched on the Quote_Number__c external ID
job = bulk.create_upsert_job("Opportunity", external_id_name="Quote_Number__c", contentType='CSV')
csv_iter = CsvDictsAdapter(iter(records_to_upsert))
batch = bulk.post_bulk_batch(job, csv_iter)
bulk.wait_for_batch(job, batch)
bulk.close_job(job)

# SalesforceBulkipy + post_bulk_batch

How to do a bulk update in Python for SF

import csv

from salesforce_bulk import SalesforceBulk
from salesforce_bulk import CsvDictsAdapter

try:
    from urllib.parse import urlparse
except ImportError:
    from urlparse import urlparse

bulk = SalesforceBulk(username="dummy@xyz.com", password="Password",
                      security_token="abY55tLsHyUK6T8k4IOFcoCl", sandbox=True)

job = bulk.create_upsert_job("Opportunity", external_id_name="Quote_Number__c", contentType='CSV')

# Read the rows to upsert from a CSV file
reader = csv.DictReader(open('update2.csv'))
disbursals = [row for row in reader]

csv_iter = CsvDictsAdapter(iter(disbursals))
batch = bulk.post_batch(job, csv_iter)
bulk.wait_for_batch(job, batch)
bulk.close_job(job)
print("Done. Data Uploaded.")
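
For reference, update2.csv needs a header row whose columns match the Salesforce field names used above, with Quote_Number__c serving as the external ID; a hypothetical example (all values made up):

Quote_Number__c,Name,StageName,CloseDate
Q-001234,Sample Opportunity A,Closed Won,2019-06-30
Q-001235,Sample Opportunity B,Negotiation,2019-07-15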